Wyndham Data Breach Ruling Cleared for Potential Appeal to Third Circuit


U.S. District Court Judge Esther Salas ruled on Monday that the U.S. Court of Appeals for the Third Circuit can review her conclusion that Section 5 of the Federal Trade Commission Act provides the FTC with authority to bring actions arising from companies’ data security violations.

In April of this year, Judge Salas denied Wyndham Hotels and Resorts’ motion to dismiss an FTC lawsuit alleging that Wyndham violated the FTC Act’s prohibition against “unfair practices” by failing to provide reasonable security for its customers’ personal information. Although her order is not a final ruling and is not binding on any other judge, it received considerable attention because it was the first time a court had weighed in on the scope of the FTC’s authority over data security and privacy matters.

Denials of motions to dismiss ordinarily are not immediately appealable, absent permission from both the district court and the court of appeals.  In her ruling on Monday, Judge Salas granted Wyndham’s motion to appeal her order to the Third Circuit.  Judge Salas reasoned that there are substantial grounds for difference of opinion on two issues: (1) whether the FTC can bring a Section 5 unfairness claim involving data security; and (2) whether the FTC must formally promulgate regulations before bringing its unfairness claim.

If the Third Circuit grants Wyndham’s Petition to Appeal, the appellate court will review the legal conclusions in Judge Salas’s April order.  If the Third Circuit denies the petition, the case will proceed in the district court.  Even if the Third Circuit denies this petition for review, it ultimately may hear an appeal of the outcome of summary judgment proceedings or a trial in this case.


The White House Big Data Report & Apple’s iOS 8: Shining the Light on an Alternative Approach to Privacy and Biomedical Research

Big data derives from “the growing technological ability to capture, aggregate, and process an ever-greater volume, velocity, and variety of data.”[i] Apple’s just-released iOS 8 software development kit (“iOS 8 SDK”) highlights this growth.[ii] The iOS 8 SDK touts over 4,000 application programming interface calls, including “greater extensibility” and “new frameworks.”[iii] For example, HomeKit and HealthKit, two of these new frameworks, serve as hubs for data generated by other applications and provide user interfaces to manage that data and related functionality.[iv] HealthKit’s APIs “provide the ability for health and fitness apps to communicate with each other … to provide a more comprehensive way to manage your health and fitness.”[v] HomeKit integrates home automation functions in a central location within the iOS device, allowing users to lock/unlock doors, turn on/off cameras, change or view thermostat settings, turn lights on/off, open garage doors and more – all from a single app.[vi] The iOS 8 SDK will inevitably lead to the development of countless apps and other technologies that “capture, aggregate, and process an ever-greater volume, velocity, and variety of data,” contributing immense volumes of data to the already-gargantuan big data ecosystem.
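To make the “hub” concept concrete, here is a minimal sketch, in Python rather than the actual Swift/Objective-C frameworks, of the pattern HealthKit implements: multiple source apps write typed samples into a shared, user-controlled store, and another app queries the aggregate. All class and field names below are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime
from collections import defaultdict

@dataclass
class Sample:
    kind: str          # e.g. "step_count", "heart_rate"
    value: float
    source_app: str    # which app contributed the sample
    recorded_at: datetime

class HealthStore:
    """A toy stand-in for a HealthKit-style shared data hub."""
    def __init__(self):
        self._samples = defaultdict(list)

    def write(self, sample: Sample):
        # A real store would first check per-app, per-type user permissions.
        self._samples[sample.kind].append(sample)

    def query(self, kind: str, since: datetime):
        return [s for s in self._samples[kind] if s.recorded_at >= since]

# A pedometer app and a workout app both contribute step counts;
# a dashboard app reads the combined stream.
store = HealthStore()
store.write(Sample("step_count", 4200, "PedometerApp", datetime(2014, 6, 2, 9, 0)))
store.write(Sample("step_count", 1800, "WorkoutApp", datetime(2014, 6, 2, 18, 0)))
total = sum(s.value for s in store.query("step_count", datetime(2014, 6, 2)))
print(total)  # 6000.0 -- an aggregate no single source app held on its own
```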

In the context of our health and wellbeing, big data – which includes, but is definitely not limited to, data generated by future iOS 8-related technologies – has boundless potential and can have a momentous impact on biomedical research, leading to new therapies and improved health outcomes. The big data reports recently issued by the White House and the President’s Council of Advisors on Science and Technology (“PCAST”) echo this fact. However, these reports also emphasize the challenges posed by applying the current approach to privacy to big data, including the focus on notice and consent.

After providing some background, this article examines the impact of big data on medical research. It then explores the privacy challenges posed by focusing on notice and consent with respect to big data. Finally, this article describes an alternative approach to privacy suggested by the big data reports and its application to biomedical research.

Background

On May 1, 2014, the White House released its report on big data, “Big Data: Seizing Opportunities, Preserving Values” (“WH Report”). The WH Report was supported by a separate effort and report produced by PCAST, “Big Data and Privacy: A Technological Perspective” (“PCAST Report”).[vii] The privacy implications of the reports for biomedical research – an area where big data can arguably have the greatest impact – are significant.

Notice and consent provide the foundation upon which privacy laws are built. Accordingly, it can be difficult to envision a situation where these conceptual underpinnings, while still important, begin to yield to a new approach. However, that is exactly what the reports suggest in the context of big data. As HealthKit and the iOS 8 SDK demonstrate, we live in a world where health data is generated in numerous ways, both inside and outside of the traditional patient-doctor relationship. If given access to all this data, researchers can better analyze the effectiveness of existing therapies, develop new therapies faster, and more accurately predict and suggest measures to avoid the onset of disease, all leading to improved health outcomes. However, existing privacy laws often restrict researchers’ access to such data unless appropriate notice has first been given and consent obtained.[viii] Focusing on individual notice and consent in some instances can be unnecessarily restrictive and can stall the discovery and development of new therapies. This is exacerbated by the fact that de-identification (or pseudonymization) – a process typically relied upon to alleviate some of these obstacles – is losing its effectiveness or would require stripping data of much meaningful value. Recognizing these flaws, the WH Report suggests a new approach that takes the focus off the collection of data and turns it to the ways in which parties, including biomedical researchers, use data – an approach that allows researchers to maximize the possibilities of big data, while protecting individual privacy and ensuring that data is processed in a reasonable way.

The Benefits of Big Data to Biomedical Research

Before discussing why a new approach to privacy in the context of big data and biomedical research may be necessary, it is first important to understand the role of big data in research. As noted, the concept of big data encompasses “the growing technological ability to capture, aggregate, and process an ever-greater volume, velocity, and variety of data.”[ix] The word “growing” is essential here, as the sources of data contributing to the big data ecosystem are extensive and will continue to expand, especially as Internet-enabled devices such as those contemplated by HomeKit continue to develop.[x] These sources include not only the traditional doctor-patient relationship, but also consumer-generated and other non-traditional sources of health data such as those contemplated by HealthKit, including wearable technologies (e.g., Fitbit), patient-support sites (e.g., PatientsLikeMe.com), wellness programs, electronic/personal health records, etc. These sources expand even further when health data is combined with non-health data, such as lifestyle and financial data.[xi]

The WH Report recognizes that these new abilities to collect and process information have the potential to bring about “unexpected … advancements in our quality of life.”[xii] The ability of researchers to analyze this vast amount of data can help “identify clinical treatments, prescription drugs, and public health interventions that may not appear to be effective in smaller samples, across broad populations, or using traditional research methods.”[xiii] In some instances, big data can in fact be the necessary component of a life-changing discovery.[xiv]

Further, the WH Report finds that big data holds the key to fully realizing the promise of predictive medicine, whereby doctors and researchers can fully analyze an individual’s health status and genetic information to better predict the onset of disease and/or how an individual might respond to specific therapies.[xv] These findings have the ability to affect not only particular patients but also family members and others with a similar genetic makeup.[xvi] It is worth noting that the WH Report highlights bio-banks and their role in “confronting important questions about personal privacy in the context of health research and treatment.”[xvii]

In summary, big data has a profound impact on biomedical research and, as a necessary result, on those who benefit from the fruits of researchers’ labor. The key to realizing that impact is a privacy regime that can unlock for researchers vast amounts of different types of data obtained from diverse sources.

Problems With the Current Approach

Where the use of information is not directly regulated by the existing privacy framework, providing consumers with notice and choice regarding the processing of their personal information has become the de facto rule. Where the collection and use of information are specifically regulated (e.g., HIPAA, FCRA, etc.), notice and consent are required whenever information is used or shared in a way not permitted under the relevant statute. For example, under HIPAA, a doctor can disclose a patient’s personal health information for treatment purposes (permissible use) but would need to provide the patient with notice and obtain consent before disclosing the same information for marketing purposes (impermissible use). To avoid this obligation, entities seeking to share data in a way not described in the privacy notice and/or permitted under applicable law can de-identify the data, purportedly making it anonymous (for example, John Smith drives a white Honda and makes $55,000/year (identified) v. Person X drives a white Honda and makes $55,000/year (de-identified)).[xviii] Except under very limited circumstances (e.g., HIPAA limited data sets), the requirements regarding notice and consent apply as much to biomedical research as to more commercial uses.
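To illustrate how de-identification is commonly performed, the sketch below replaces the direct identifier with a stable keyed pseudonym while leaving the other attributes intact; it is a minimal example, and the key handling is a stand-in for real key-management infrastructure.

```python
import hashlib
import hmac

# Stand-in secret; a real deployment would keep this in a key vault or HSM.
# Destroying the key turns pseudonymous data into effectively anonymous data,
# since the identifier-to-pseudonym mapping can no longer be rebuilt.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    # Deterministic keyed hash: the same person always maps to the same
    # pseudonym, so records can still be linked across datasets for analysis.
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return "Person-" + digest.hexdigest()[:8]

record = {"name": "John Smith", "car": "white Honda", "salary": 55000}
deidentified = {**record, "name": pseudonymize(record["name"])}
print(deidentified)
# {'name': 'Person-...', 'car': 'white Honda', 'salary': 55000}
```

Note that the quasi-identifiers (car type and salary) survive the process untouched, which is exactly what makes the re-identification problem discussed below possible.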

In the context of big data, the first problem with notice and consent is that it places an enormous burden on the individual to manage all of the relevant privacy notices applicable to the processing of that individual’s data. In other words, it requires individuals to analyze each and every privacy notice applicable to them (which could be hundreds, if not more), determine whether those data collectors share information and with whom, and then attempt to track that information down as necessary. As the PCAST Report not-so-delicately states, “[i]n some fantasy world, users actually read these notices, understand their legal implications (consulting their attorneys if necessary), negotiate with other providers of similar services to get better privacy treatment, and only then click to indicate their consent. Reality is different.”[xix] This is aggravated by the fact that relevant privacy terms are often buried in privacy notices using legalese and provided on a take-it-or-leave-it basis.[xx] Although notice and consent may still play an important role where there is a direct connection between data collectors and individuals, it is evident why such a model loses its meaning when information is collected from a number of varied sources and those analyzing the data have no direct relationship with individuals.

Second, even where specific privacy regulations apply to the collection and use of personal information, such rules rarely contemplate or routinely allow the disclosure of that information to researchers for biomedical research purposes, thus requiring researchers to independently provide notice and obtain consent. As the WH Report points out, “[t]he privacy frameworks that currently cover information now used in health may not be well suited to … facilitate the research that drives them.”[xxi] And as previously noted, biomedical researchers often require non-health information, including lifestyle and financial data, if they want to maximize the benefits of big data. “These types of data are subjected to different and sometimes conflicting federal and state regulation,” if any regulation at all.[xxii]

Lastly, de-identification is becoming easier to defeat due to “effective techniques … to pull the pieces back together through ‘re-identification’.”[xxiii] In fact, the very techniques used to analyze big data for legitimate purposes are the same advanced algorithms and technologies that allow re-identification of otherwise anonymous data.[xxiv] Moreover, “meaningful de-identification may strip the data of both its usefulness and the ability to ensure its provenance and accountability.”[xxv] In other words, de-identification is not as useful as it once was, and further stripping data in an effort to overcome this fact could well extinguish any value the data may have (using the example above, car type and salary may still provide marketers with meaningful information (e.g., individuals with a similar salary may be interested in that car type), but the information “white Honda” alone is worthless).[xxvi]
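The re-identification point is easy to demonstrate. In the hypothetical sketch below (all data invented), a “de-identified” research record is re-linked to a named individual simply by matching its quasi-identifiers against a second, public dataset:

```python
# A de-identified research dataset and a public dataset that share
# quasi-identifiers (car type and salary).
deidentified_rows = [
    {"pseudonym": "Person-X", "car": "white Honda", "salary": 55000,
     "diagnosis": "diabetes"},
]
public_rows = [
    {"name": "John Smith", "car": "white Honda", "salary": 55000},
    {"name": "Jane Doe",   "car": "blue Ford",   "salary": 72000},
]

for r in deidentified_rows:
    # Match on the combination of quasi-identifiers.
    matches = [p for p in public_rows
               if p["car"] == r["car"] and p["salary"] == r["salary"]]
    if len(matches) == 1:
        # A unique match re-identifies the "anonymous" record.
        print(matches[0]["name"], "->", r["diagnosis"])  # John Smith -> diabetes
```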

The consequences of all this are either 1) biomedical researchers are deprived of valuable data or provided meaningless de-identified data, or 2) individuals have no idea that their information is being processed for research purposes. Both the benefits and obstacles relating to big data and biomedical research led to the WH Report’s recognition that we may need “to look closely at the notice and consent framework” because “focusing on controlling the collection and retention of personal data, while important, may no longer be sufficient to protect personal privacy.”[xxvii] Further, as the PCAST Report points out, and as reflected in the WH Report, “notice and consent is defeated by exactly the positive benefits that big data enables: new, non-obvious, unexpectedly powerful uses of data.”[xxviii] So what does this new approach look like?

Alternative Approach to Big Data: Focus on Use, Not Collection[xxix]

The WH Report does not provide specific proposals. Rather, it suggests a framework for a new approach to big data that focuses on the type of use of such data and associated security controls, as opposed to whether notice was provided and consent obtained at the point of collection. Refocusing attention on the context and ways big data is used (including the ways in which results generated from big data analysis are used) could have many advantages for individuals and biomedical researchers. For example, as noted above, the notice and consent model places the burden on the individual to manage all of the relevant privacy notices applicable to the processing of that individual’s data and provides no backstop when those efforts fail or no attempt to manage notice provisions is made. Where the attention focuses on the context and uses of data, the burden of managing privacy expectations shifts to the data collector, and entities that utilize big data (e.g., researchers) are held accountable for how data is used and any negative consequences it yields.[xxx]

The following are some specific considerations drawn from the reports regarding how a potential use framework might work:

  • Provide that all information used by researchers, regardless of the source, is subject to reasonable privacy protections similar to those prescribed under HIPAA.[xxxi] For example, any data relied upon by researchers can only be used and shared for biomedical research purposes.
  • Create special authorities or bodies to determine reasonable uses for big data utilized by researchers so as to realize the potential of big data while preserving individual privacy expectations.[xxxii] This would include recognizing and controlling harmful uses of data, including any actions that would lead to an adverse consequence to an individual.[xxxiii]
  • Develop a central research database for big data accessible to all biomedical researchers, with universal standards and architecture to facilitate controlled access to the data contained therein.[xxxiv]
  • Provide individuals with notice and choice whenever big data is used to make a decision regarding a particular individual.[xxxv]
  • Where individuals may not want certain data to enter the big data ecosystem, allow them to create standardized data use profiles that must be honored by data collectors. Such profiles could prohibit the data collector from sharing any information associated with such individuals or their devices (a minimal sketch of such a profile follows this list).
  • Require reasonable security measures to protect data and any findings derived from big data, including encryption requirements.[xxxvi] 
  • Regulate inappropriate uses or disclosures of research information, and make parties liable for any adverse consequences of privacy violations.[xxxvii]
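As one way of making the data use profile idea concrete, the hypothetical sketch below shows a collector consulting a standardized profile before each disclosure; the structure and category names are invented, not drawn from the reports.

```python
from dataclasses import dataclass, field

@dataclass
class DataUseProfile:
    """A standardized, user-authored statement of permitted uses."""
    user_id: str
    permitted_uses: set = field(default_factory=set)

def may_share(profile: DataUseProfile, purpose: str) -> bool:
    # Under a use-based framework, collectors would be required to run a
    # check like this before every disclosure and to log the outcome.
    return purpose in profile.permitted_uses

profile = DataUseProfile("user-123", {"biomedical_research"})
print(may_share(profile, "biomedical_research"))  # True
print(may_share(profile, "marketing"))            # False
```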

By offering these suggestions for public debate, the WH and PCAST reports have only initiated the discussion of a new approach to privacy, big data and biomedical research. Plainly, these proposals bring with them numerous questions and issues that must be answered and resolved before any transition can be contemplated (notably, what are appropriate uses and who determines this?).

Conclusion

Technologies utilizing the iOS 8 SDK, including HealthKit and HomeKit, illustrate the technological growth contributing to the big data environment. The WH and PCAST reports exemplify the endless possibilities that can be derived from this environment, as well as some of the important privacy issues affecting our ability to harness those possibilities. The reports reflect their authors’ consensus view that the existing approach to big data and biomedical research restricts the true potential of big data for research, while providing individuals with little-to-no meaningful privacy protection. Whether the suggestions contained in the WH and PCAST reports will be – or should be – further developed is an open question that will undoubtedly lead to a healthy debate. Yet, in the case of the PCAST Report, the sheer diversity of players recognizing big data’s potential and its privacy implications – including leading representatives and academics from the Broad Institute of Harvard and MIT, UC-Berkeley, Microsoft, Google, the National Academy of Engineering, the University of Texas at Austin, the University of Michigan, Princeton University, Zetta Venture Partners, the National Quality Forum and others – provides hope that this potential will one day be realized in a way that appropriately protects our privacy.[xxxviii]

WH Report Summary: click here.

PCAST Report Summary: click here.


[i] WH Report, p. 2.

[ii] See Apple’s June 2, 2014, press release, Apple Releases iOS 8 SDK With Over 4,000 New APIs, last found at http://www.apple.com/pr/library/2014/06/02Apple-Releases-iOS-8-SDK-With-Over-4-000-New-APIs.html.

[iii] Id.

[iv] Id.

[v] Id.

[vi] Id.

[vii] The White House and PCAST issued summaries of their respective reports, including their policy recommendations, which can be easily found at the links following this article.

[viii] WH Report, p. 7.

[ix] WH Report, p. 2.

[x] WH Report, p. 5.

[xi] WH Report, p. 23.

[xii] WH Report, p. 3.

[xiii] WH Report, p. 23.

[xiv] WH Report, p. 6 (the WH Report includes two research-related examples of the impact of big data on research, including a study whereby the large number of data sets made “the critical difference in identifying the meaningful genetic variant for a disease.”).

[xv] WH Report, p. 23.

[xvi] WH Report, p. 23.

[xvii] WH Report, p. 23.

[xviii] In privacy law, “anonymous” data is often considered a subset of “de-identified” data. “Anonymized” data means the data has been de-identified and is incapable of being re-identified by anyone. “Pseudonymized” data, the other primary subset of “de-identified” data, replaces identifying data elements with a pseudonym (e.g., random id number), but can be re-identified by anyone holding the key. If the key was destroyed, “pseudonymized” data would become “anonymized” data.

[xix] PCAST Report, p. 38.

[xx] PCAST Report, p. 38.

[xxi] WH Report, p. 23.

[xxii] WH Report, p. 23.

[xxiii] WH Report, p. 8.

[xxiv] WH Report, p. 54; PCAST Report, pp. 38-39.

[xxv] WH Report, p. 8.

[xxvi] The PCAST Report does recognize that de-identification can be “useful as an added safeguard.” See PCAST Report, p. 39. Further, other leading regulators and academics consider de-identification a key part of protecting privacy, as it “drastically reduces the risk that personal information will be used or disclosed for unauthorized or malicious purposes.” Dispelling the Myths Surrounding De-identification: Anonymization Remains a Strong Tool for Protecting Privacy, Ann Cavoukian, Ph.D. and Khaled El Emam, Ph.D. (2011), last found at http://www.ipc.on.ca/images/Resources/anonymization.pdf. Drs. Cavoukian and El Emam argue that “[w]hile it is clearly not foolproof, it remains a valuable and important mechanism in protecting personal data, and must not be abandoned.” Id.

[xxvii] WH Report, p. 54.

[xxviii] PCAST Report, p. 38; WH Report, p. 54.

[xxix] This approach is not one of the official policy recommendations contained in the WH Report. However, as discussed above, the WH Report discusses the impact of big data on biomedical research, as well as this new approach, extensively. Further, to the extent order has any meaning, the first recommendation made in the PCAST Report is that “[p]olicy attention should focus more on the actual uses of big data and less on its collection and analysis.” PCAST Report, pp. 49-50.

[xxx] WH Report, p. 56.

[xxxi] WH Report, p. 24.

[xxxii] WH Report, p. 23.

[xxxiii] PCAST Report, p. 44.

[xxxiv] WH Report, p. 24.

[xxxv] PCAST Report, pp. 48-49.

[xxxvi] PCAST Report, p. 49.

[xxxvii] PCAST Report, pp. 49-50.

[xxxviii] It must be noted that many leading regulators and academics have a different view on the importance and role of notice and consent, and argue that these principles in fact deserve more focus. See, e.g., The Unintended Consequences of Privacy Paternalism, Ann Cavoukian, Ph.D., Dr. Alexander Dix, LLM, and Khaled El Emam, Ph.D. (2014), last found at http://www.privacybydesign.ca/content/uploads/2014/03/pbd-privacy_paternalism.pdf.

Target Becomes a Target: Proposed California Bill Aims to Make Retailers Liable for Data Breach Incidents

Following a string of high-profile data breaches and new data suggesting that approximately 21.3 million customer accounts have been exposed by data breach incidents over the past two years, the California legislature has introduced legislation aimed at making retailers responsible for certain costs in connection with data breach incidents.  If passed in its current form, Assembly Bill 1710, titled the Consumer Data Breach Protection Act, would have a substantial impact on retailers operating in California.

Among the major changes proposed in the bill:

  • Stricter Notification Requirements.  The proposed bill would create stricter time-frames and specific requirements for notification of affected consumers following a data breach incident.  In addition to current requirements to notify consumers individually in the most expedient time possible, a retailer affected by a data breach will be required, within 15 days of the breach incident, to provide email notification to affected individuals, post a general notice on the retailer’s web page and notify statewide media.
  • Retailer Liability for Costs Associated with Data Breach Incidents.  A.B. 1710 would amend California’s Civil Code to make retailers liable for reimbursement of expenses incurred in providing the notices described above, as well as the cost of replacing payment cards of affected individuals.
  • Mandatory Provision of Credit Monitoring Services.  If the person or business required to provide notification under the Civil Code is the source of the breach incident, A.B. 1710 will require that person or business to offer to provide identity theft prevention and mitigation services at no cost to affected consumers for not less than 24 months.
  • Prohibitions Against Storing Payment-Related Data.  Under a new section to be added to the Civil Code, persons or businesses who sell goods or services and accept credit or debit card payments would be prohibited from storing payment-related data unless that person or business stores and retains the data in accordance with a payment data retention and disposal policy that limits retention of the data to only the amount of time required for business, legal and regulatory purposes.  In addition, A.B. 1710 imposes further restrictions on the retention and storage of certain sensitive authentication information, such as social security numbers, drivers’ license numbers and PIN numbers.
  • Authorization of Civil Penalties.  As amended by A.B. 1710, the Civil Code would authorize a prosecutor to bring an action in response to a data breach incident to recover civil penalties of up to $500 per violation, or up to $3,000 for a willful or reckless violation.

Historically, measures like A.B. 1710 have faced a difficult road.  Similar bills passed by the California legislature were twice vetoed by Governor Schwarzenegger, and the proposal of A.B. 1710 has already prompted the California Retailers Association to speak out against the bill.  However, there may be a critical difference in the current climate: consumer awareness of the danger and reality of breach incidents has never been higher and, as shown by the recent Harris Poll, consumers overwhelmingly believe that merchants are to blame.

Article By:

Of:

Gaga for Gigabit: The FCC (Federal Communications Commission) Liberates 100 MHz of Spectrum for Unlicensed Wi-Fi

On April 1, the FCC took steps to remedy a small but growing annoyance of modern life:  poor Wi-Fi connectivity.  By unanimous vote, and removing restrictions that had been in place to protect the mobile satellite service uplinks of Globalstar, the FCC’s First Report and Order on U-NII[1] will free devices for both (i) outdoor operations and (ii) operation at higher power levels in the 5.15–5.25 GHz band (also called the U-NII-1 band). The Report and Order also requires manufacturers to take steps to prevent unauthorized software changes to equipment in the U-NII bands, and imposes measures protecting weather and other radar systems in the band.

The practical impact of these rule changes is difficult to overstate.  By removing the operating restrictions in the U-NII-1 band, the FCC essentially doubled the amount of unlicensed spectrum in the 5 GHz band available to consumers.  In the near future, use of this spectrum will help to alleviate congestion on existing Wi-Fi networks, especially outdoor “hotspots” typically used at large public places like airports, stadiums, hotels and convention centers.  Two less-obvious, longer-term benefits also are worth watching.

First, the new IEEE 802.11ac standard for Wi-Fi was finalized in January 2014.  This next-generation Wi-Fi standard is capable of delivering vast increases in raw throughput capacity to end-users, often approaching the holy grail of transfer speeds: 1 gigabit per second.  To achieve those speeds, wide channels of operation are required – channels that simply were not available to Wi-Fi devices.  Now that the U-NII-1 band has been unleashed for Wi-Fi usage, there should be little impediment to the near-term rollout of 802.11ac-compatible devices.
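A rough back-of-the-envelope calculation shows why channel width is the gating factor. Assuming roughly 5.4 bits per second per hertz per spatial stream at 802.11ac’s top modulation rate (a simplification of the published VHT rate tables, and a PHY-layer ceiling that real-world throughput will not reach), the achievable rate scales directly with channel width:

```python
# Approximate 802.11ac PHY rate, assuming ~5.4 bits/s/Hz per spatial stream
# at the top 256-QAM coding rate. This is a simplification for illustration;
# consult the VHT rate tables for exact figures.
def approx_phy_rate_mbps(channel_mhz: int, spatial_streams: int) -> float:
    return channel_mhz * 5.4 * spatial_streams

for width in (20, 40, 80, 160):
    print(f"{width} MHz, 2 streams: ~{approx_phy_rate_mbps(width, 2):.0f} Mbps")
# Only the 80 and 160 MHz channels approach or exceed 1 Gbps, and channels
# that wide only fit comfortably once bands like U-NII-1 open up.
```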

This new standard will offer marked improvements in download speeds and streaming quality, and be a boon to consumers who increasingly rely on mobile devices for bandwidth intensive applications such as HD video.  Unsurprisingly, cable operators in particular are excited by the possibilities of this technology; on the day the Report and Order was released, Comcast Chief Technology Officer Tony Werner authored a lengthy blog post touting the possibilities of Comcast offering Gigabit Wi-Fi to its customers utilizing the U-NII-1 band.[2]

Second, in addition to the untempered enthusiasm of the MSOs (cable multiple-system operators), wireless carriers also have a stake in this unlicensed spectrum.  Specifically, as use of licensed mobile spectrum continues to expand exponentially, the wireless carriers will increasingly encourage wireless offloading as a means of addressing congestion and capacity issues on macro cellular networks.  For example, Cisco Systems estimates that 45% of global mobile data traffic was offloaded onto the fixed network through Wi-Fi or small cells in 2013.[3]

This transformation of 100 MHz of spectrum in the U-NII-1 band marks one part of a renewed focus on consumer broadband at the FCC.  In addition to unlicensed Wi-Fi, the FCC is also in the middle of a proceeding – covered in an earlier FCC Law Blog post[4] – to streamline rules for wireless infrastructure.  Taken together with the FCC’s release earlier this week of rules for this year’s auction of 65 MHz of AWS-3 spectrum, it becomes clear that, although it is early yet, the Wheeler Commission is gaga for broadband.


[1] U-NII is the acronym for “Unlicensed National Information Infrastructure” devices, unintentional radiators that facilitate broadband access and wireless local area networking, including Wi-Fi.  A copy of the First Report and Order is available here.

[2] See Tony Werner’s blog post here.

[3] See Global Mobile Data Traffic Forecast Update, 2013-2018.

[4] See Sleeper “Small” Cells: The Battle Over The FCC’s Wireless Infrastructure Proceeding.


Risky Business: Target Discloses Data Breach and New Risk Factors in 8-K Filing… Kind Of

After Target Corporation’s (NYSE: TGT) net earnings dropped 46% in its fourth quarter compared to the same period last year, Target finally answered the 441 million dollar question – To 8-K, or not to 8-K?  Target filed its much anticipated Current Report on Form 8-K on February 26th, just over two months after it discovered its massive data breach.

In its 9-page filing, Target included two introductory sentences relating to disclosure of the breach under Item 8.01 – Other Events:

During the fourth quarter of 2013, we experienced a data breach in which certain payment card and other guest information was stolen through unauthorized access to our network. Throughout the Risk Factors in this report, this incident is referred to as the ‘2013 data breach’.

Target then buried the three new risk factors that directly discussed the breach, apparently at random, within a total of 18 new risk factors covering a variety of topics ranging from natural disasters to income taxes.  Among the risk factors appearing throughout the 8-K were the following:

  • The data breach we experienced in 2013 has resulted in government inquiries and private litigation, and if our efforts to protect the security of personal information about our guests and team members are unsuccessful, future issues may result in additional costly government enforcement actions and private litigation and our sales and reputation could suffer.
  • A significant disruption in our computer systems and our inability to adequately maintain and update those systems could adversely affect our operations and our ability to maintain guest confidence.
  • We experienced a significant data security breach in the fourth quarter of fiscal 2013 and are not yet able to determine the full extent of its impact and the impact of government investigations and private litigation on our results of operations, which could be material.

An interesting and atypically relevant part of Target’s 8-K is the “Date of earliest event reported” on its 8-K cover page.  Although Target disclosed its fourth quarter 2013 breach under Item 8.01, Target still listed February 26, 2014 as the date of the earliest event reported, which is the date of the 8-K filing and corresponding press release disclosing Target’s financial results.  One can only imagine that this usually benign date on Target’s 8-K was deliberated over for hours by expensive securities lawyers, and that using the February earnings release date instead of the December breach date was nothing short of deliberate.  It was likely one more subtle way to shift the market’s focus away from the two-month-old data breach and instead bury the disclosure within a standard results-of-operations 8-K filing and 15 non-breach-related risk factors.

To Target’s credit, its fourth quarter and fiscal year ended on February 1, 2014, and Target’s fourth quarter included the entirety of the period during and after the breach through February 1.  Keeping that in mind, Target may not have had a full picture of how the breach affected its earnings in the fourth quarter until it prepared its fourth quarter and year-end financial statements this month.  Maybe the relevant “Date of earliest event” was the date on which Target was able to fully appreciate the effects of the breach, which occurred on the day that it finalized and released its earnings on February 26.  But maybe not.

Whatever the case may be, Target’s long awaited 8-K filing is likely only a short teaser of the disclosure that should be included in Target’s upcoming Form 10-K filing.

Article by:

Adam M. Veness

Of:

Mintz, Levin, Cohn, Ferris, Glovsky and Popeo, P.C.

Dealing with Personal Information at the Water’s Edge… Re: U.S. Safe Harbor Program


Privacy and data security issues and concerns do not stop at the water’s edge. Companies needing to share personal information, even when the sharing will take place inside the same “company,” frequently run into challenges when that sharing takes place across national borders. In some ways, the obstacles created by the matrix of federal and state data privacy and security laws in the U.S. are dwarfed by the matrix that exists internationally. Most countries regulate to some degree the handling of data, from access, to processing, to disclosure and destruction. And the law continues to develop rapidly, sometimes due to unexpected events. Take, for example, the U.S. Safe Harbor program that was designed to facilitate the transfer of personal data of individuals in the European Union (EU) to the United States. Because the EU believes that the law in some countries, including the U.S., fails to provide “adequate safeguards,” the general rule is that personal data of EU persons cannot be sent to the U.S. unless an exception applies. One exception is based on a negotiated deal between the EU and the U.S., commonly known as the U.S. Safe Harbor, a program that currently is in some jeopardy due to the recent reports of NSA monitoring, Snowden, etc.

Currently, to meet the Safe Harbor, a company must take certain steps, including (i) appointing a privacy ombudsman; (ii) reviewing and auditing data privacy practices; (iii) establishing a data privacy policy that addresses the following principles: notice, choice, onward transfer of data, security, integrity, access and enforcement; (iv) implementing privacy and enforcement procedures; (v) obtaining consents and creating inventory of consents for certain disclosures; and (vi) self-certifying compliance to the U.S. Department of Commerce.

A recent statement from Viviane Reding, European Commissioner for Justice, Fundamental Rights and Citizenship, quoted in The Guardian, October 17, 2013, signals some changes may be in store for the Safe Harbor:

“The Safe Harbour may not be so safe after all. It could be a loophole because it allows data transfers from EU to US companies, although US data protection standards are lower than our European ones,” said Reding. “Safe Harbour is based on self-regulation and codes of conduct. In the light of the recent revelations, I am not convinced that relying on codes of conduct and self-regulation that are not policed in a strict manner offer the best way of protecting our citizens.”

At the same time, the EU continues to update and strengthen its protections for personal data. Companies that operate globally need to be sensitive not only to complying with the laws governing activities within a jurisdiction, but also to the laws governing activities between jurisdictions. Common business decisions, such as deciding where data will be stored, setting up global databases for employees’ medical, personnel and other information, or arranging for enterprise-wide employee benefits or monitoring programs, can face significant obstacles arising from the interplay of the data privacy and security laws of the countries involved.

Article by:

Joseph J. Lazzarotti

By:

Jackson Lewis P.C.

New Online Privacy Policy Requirements Take Effect January 1, 2014


California Online Privacy Protection Act (CalOPPA)

Owners of websites, online services or mobile applications (apps) that can be accessed or used by California residents should ensure their compliance with the new amendments to the California Online Privacy Protection Act of 2003 (CalOPPA) by the law’s January 1, 2014 effective date.  The borderless nature of the Internet makes this law applicable to almost every website or online service and mobile application.  Accordingly, companies should review and revise their online privacy policies to ensure compliance with the new law and avoid potentially significant penalties.

Previously, CalOPPA required the owner of any website or online service operated for commercial purposes (an “operator”) that collects California residents’ personally identifiable information (PII) to conspicuously post a privacy policy that met certain content requirements, including identifying the types of PII collected and the categories of third parties with whom that information is shared. The new law requires that companies subject to CalOPPA provide the following additional disclosures in their privacy policies.

  • How an operator responds to “do not track” signals from Internet browsers and any other mechanism that provides consumers a choice regarding the collection of PII about an individual consumer’s online activities over time and across third-party websites and online services.  A company may satisfy this requirement by revising its privacy policy to include the new disclosures or by providing a clear and conspicuous hyperlink to a webpage that contains a description of any program or protocol the company follows to provide consumers a choice about tracking, including the effects of the consumer’s choice.
  • An affected company must disclose to users whether third parties may collect PII about a user’s online activities over time and across different websites when a consumer uses the operator’s website or online service. However, an operator is not required to disclose the identities of such third parties.

The California law does not require that operators honor a user’s “do not track” signals. Instead, operators must only provide users with a disclosure about how the website or mobile app will respond to such mechanisms. “Do not track” mechanisms are typically browser or device settings that are transmitted to websites or mobile apps (for example, as an HTTP header) to signal that the user does not want his or her website or app activities tracked by the operator, including through analytics tools, advertising networks, and other types of data collection and tracking practices.  Further, the Privacy Enforcement and Protection Unit of the California Office of the Attorney General recently stated that the required disclosures should not be limited to tracking simply for online behavioral advertising purposes, but must extend to any other purpose for which online behavioral data is collected by a business’s website (e.g., market research, website analytics, website operations, fraud detection and prevention, or security).
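For websites, the most common “do not track” mechanism arrives as a simple HTTP request header (DNT: 1), so detecting it server-side is straightforward. The sketch below, written with the Flask web framework, shows one hypothetical way an operator might both detect the signal and choose to honor it; CalOPPA itself requires only that the operator disclose whatever its actual response is.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # Browsers that enable "do not track" send the header "DNT: 1".
    dnt = request.headers.get("DNT")
    if dnt == "1":
        # This operator chooses to honor the signal by skipping its
        # analytics and behavioral-tracking code paths for the request.
        return "Welcome! Tracking is disabled per your DNT signal."
    return "Welcome! Standard analytics apply (see our privacy policy)."

if __name__ == "__main__":
    app.run()
```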

A violation of the law can result in a civil fine of up to $2,500 per incident. The California Attorney General maintains that each noncompliant mobile app download constitutes a single violation and that each download may trigger a fine; on that theory, an app downloaded just 10,000 times could expose its operator to as much as $25 million in penalties.

Given that most company websites will have California visitors, companies should consider taking the following steps to ensure compliance with the CalOPPA amendments by January 1, 2014:

  • Identify the tracking mechanisms in place on your company’s websites and online services, including (a) the specific types of PII collected by each tracking mechanism, (b) whether users have the option to control whether and how the mechanisms are used, and (c) how the website responds to “do not track” signals. Seek input from those familiar with your website, including (i) technicians and developers who understand the mechanics of how the website operates, including how it responds to “do not track” signals, (ii) financial and marketing personnel who understand how user PII is monetized, and (iii) any other stakeholders who access or handle user PII.
  •  Review the practices of any third parties that have the ability to track users on your website. To draft the new disclosures, you will need to understand how those third parties track your users and whether they are capable of doing so before or after the users leave your service.
  • Incorporate the information identified above to modify your online privacy policy to include the required behavioral tracking disclosures.
  • Retain the prior version of the policy in your records, including the date on which each version was posted to the site. The new version should have an updated effective date to distinguish it from the previous version.

Expansion of California’s Data Breach Notification Requirements

Under another new law taking effect on January 1, 2014, California will expand its data breach notification requirements by adding new types of information to the definition of “personal information” under California Civil Code §§ 1798.29 and 1798.82. The new law requires notification if a California resident’s personal information is compromised, and, as with CalOPPA, the breach notification requirements apply regardless of the location of the organization that sustains the breach.  Therefore, if your business collects and retains California residents’ PII, the amended California breach notification law will apply.

Previously, the California law required notification of a data breach in the event of the unauthorized access to or disclosure of an individual’s name, in combination with that individual’s (i) Social Security number, (ii) driver’s license or California ID number, (iii) account, credit or debit card number, together with a security or access code, (iv) medical information, or (v) health information, where either the name or the other piece of information was not encrypted. Under the new definition, “personal information” will also include “[a] user name or email address, in combination with a password or security question and answer that would permit access to an online account.”
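Compressed into code, the trigger logic described above looks roughly like the sketch below. The field names are invented and the statute is simplified considerably, but it captures the two independent paths to a notification obligation:

```python
# Sensitive data elements that, combined with a name, trigger notification
# under the pre-existing rule (simplified).
SENSITIVE_FIELDS = {"ssn", "drivers_license", "financial_account_with_code",
                    "medical_info", "health_info"}

def notification_required(exposed: dict) -> bool:
    # Original trigger: a name plus at least one sensitive element, where
    # the name or the element was unencrypted.
    if exposed.get("name") and SENSITIVE_FIELDS & set(exposed.get("unencrypted", [])):
        return True
    # New trigger (effective 2014): online credentials -- a username or
    # email address together with a password or security question/answer.
    creds = set(exposed.get("credentials", []))
    return "username_or_email" in creds and bool({"password", "security_qa"} & creds)

print(notification_required({"name": "Jane Doe", "unencrypted": ["ssn"]}))        # True
print(notification_required({"credentials": ["username_or_email", "password"]}))  # True
print(notification_required({"name": "Jane Doe", "unencrypted": []}))             # False
```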

Accordingly, if your business or organization collects this type of information, then it should consider undertaking the following proactive measures to reduce the risk and magnitude of a potential data breach:

  • Periodically and systematically delete nonessential personal information. By deleting obsolete PII and other sensitive information, businesses can significantly reduce the risk of a breach.  Retaining such obsolete legacy PII serves no business purpose, but only adds unnecessary exposure and potential liability.
  • Conduct a PII inventory and perform a risk assessment of your security measures.  Identify what PII is being collected by your organization, where it is retained, who has access to it, and what security measures protect it.  Ensuring that sufficient protections are in place may not prevent every incident, but it can reduce the possibility of an incident occurring in the first place and limit the disruption to your business if there is a breach.
  • Limit the disclosure of PII to third parties only when necessary to provide services or products. You can be equally responsible for a data breach notification if the person or entity who experiences the data breach was a third party who received PII from you. Any vendor or third party with whom you share PII should contractually represent and warrant that they have in place certain standards for protecting that information and agree to indemnify your company for any loss that results from a breach.


Of:

Vedder Price

Are You Ready for the Coming Explosion of Cybersquatting?


The next wave of domain-name barbarians is gathering outside the gates. Here’s what you need to do now to keep your trademarks, and your e-commerce, safe.

Almost every business has had to deal with cybersquatters – pirates that launch web sites designed to divert customers by using domain names that mimic the business’s trademarks.

Until now, the war has focused primarily on domain names within the “.com” sphere. But the battlefront is about to expand – dramatically.

The international body that runs the Internet (called ICANN) has recently begun releasing new generic top-level domains (“gTLDs”). In addition to the familiar “.com,” this program makes it possible to set up a business name, a trademark, a geographic designation – virtually any word in any language – as a gTLD in its own right. Almost 2,000 applications for gTLDs were filed, and more than 1,000 will ultimately be granted. Because many of the new gTLDs will sell domain names to all comers without any attention to whether the names violate someone else’s trademark rights, they will create a giant new arena in which domain name pirates can operate.

So what should you do now to protect your brands and your domain names?

1. Lock up the family jewels.

ICANN has mandated the creation of a Trademark Clearinghouse (“TMCH”), in which owners can list their registered trademarks. It has also required that all newly-released gTLDs offer a 30-day “Sunrise” period in which owners of marks listed in the TMCH get first crack at registering them as domain names. In addition, during the Sunrise period and for sixty days thereafter, other parties that apply for those marks will be advised of the TMCH listing and, if they pursue their application, the owners of the TMCH-listed marks will be notified, giving them an opportunity to invoke various dispute-resolution procedures.

The Trademark Clearinghouse is now in operation, and it makes sense for brand owners to list at least their “core” trademarks there. These are the marks in which you have invested the most time, energy, and money; the ones most closely associated with your business; the ones you have already had to protect most often in the .com realm.

2. Plan now to make preemptive registrations in gTLDs of particular interest.

An important limitation of the Trademark Clearinghouse is that it protects only against domain names that are identical to your registered trademarks, not against common misspellings, typos, and so on. This leads to a second important step: being prepared to file preemptive domain name registrations for common variations of your brand.

Now is the time to identify the specific gTLDs in which you will be especially interested and to watch for their release dates. For instance, if you’re in the auto industry you will likely want to be active in such gTLDs as “.auto,” “.car,” and the like. As soon as the Sunrise period for one of your identified gTLDs opens, be ready to file immediately. This is an instance where the best defense is a vigorous offense.

Many brand owners were caught unawares years ago when the Internet burst upon the scene, and control of brand-related domain names became crucial. There’s no way to stop the next wave of cyberpiracy. But there’s also no reason not to be prepared for it.

Article by:

John C. Blattner

Of:

Dickinson Wright PLLC

Google Glass In the Workplace

The WSJ reported on November 22, 2013, on Google’s push to move Google Glass, a computerized device with an “optical head-mounted display,” into the mainstream by tapping the prescription eyewear market through VSP Global—a nationwide vision benefits provider and maker of frames and lenses. If the speed and immersion of technology over the past few years has shown us anything, it is that it will not be too long before employees are donning Google Glass on the job, putting yet another twist on technology’s impact on the workplace.

Employers continue to adjust to the influx of personal smartphones in the workplace, many adopting “Bring Your Own Device” (BYOD) strategies and policies. These technologies have no doubt been beneficial to businesses and workplaces around the globe. The introduction of Google Glass into the workplace may have similar benefits, but the technology also could amplify many of the same challenges as other personal devices, and create new ones.

For example, employers may experience productivity losses as employees focus on their Glass eyepiece and not their managers, co-workers, and customers. Likewise, some businesses will need to consider whether Google Glass may contribute to a lack of attention to tasks that can create significant safety risks for workers and customers, such as for employees who drive or use machinery as a regular part of their jobs.

A popular feature of Google Glass is the ability to record audio and video. Smartphones and other devices do this already, but recording with Glass is much easier and may become less obvious over time as we get used to seeing folks wearing Glass. Of course, recording of activities and conversations in the workplace raises a number of issues. In healthcare, for instance, employees might capture protected health information with their devices, but potentially without the proper protections under HIPAA. Conversations recorded without the consent of the appropriate parties can violate the law in a number of states. Employees with regular access to sensitive financial information could easily capture a wealth of personal data, raising yet another data privacy and security risk.

Even where data captured on Glass is not collected, used or safeguarded improperly, it will add to the challenges businesses face in avoiding spoliation, as Glass devices become additional repositories of potentially relevant evidence.

Only time and experience will tell what the impact of Google Glass will be in the workplace. However, as companies continue to adapt to present technologies, they should be keeping an eye on the inevitable presence of such new technologies, and avoid being caught without a strategy for reducing risks and avoidable litigation.

Article by:

Joseph J. Lazzarotti

Of:

Jackson Lewis LLP

California Enacts New Data Privacy Laws

As part of a flurry of new privacy legislation, California Governor Jerry Brown signed two new data privacy bills into law on September 27, 2013: S.B. 46 amending California’s data security breach notification law and A.B. 370 regarding disclosure of “do not track” and other tracking practices in online privacy policies. Both laws will come into effect on January 1, 2014.

New Triggers for Data Security Breach Notification

California law already imposes a requirement to provide notice to affected customers of unauthorized access to, or disclosure of, personal information in certain circumstances. S.B. 46 adds to the current data security breach notification requirements a new category of data triggering these notification requirements: A user name or email address, in combination with a password or security question and answer that would permit access to an online account.

Where the information subject to a breach only falls under this new category of information, companies may provide a security breach notification in electronic or other form that directs affected customers to promptly change their passwords and security questions or answers, as applicable, or to take other steps appropriate to protect the affected online account and all other online accounts for which the customer uses the same user name or email address and password or security question or answer. In the case of login credentials for an email account provided by the company, the company must not send the security breach notification to the implicated email address, but needs to provide notice by one of the other methods currently provided for by California law, or by clear and conspicuous notice delivered to the affected user online when the user is connected to the online account from an IP address or online location from which the company knows the user ordinarily accesses the account.
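The delivery rule can be summarized in a short sketch. The function and flag names below are invented for illustration; the point is simply that a compromised company-provided mailbox can never itself be the notification channel:

```python
def choose_notification_channel(breached_account_is_company_email: bool,
                                user_connected_from_known_ip: bool) -> str:
    """Pick a breach-notice channel consistent with the rule described above."""
    if not breached_account_is_company_email:
        # Electronic notice directing the customer to change passwords and
        # security questions/answers is permitted.
        return "electronic_notice_with_reset_instructions"
    if user_connected_from_known_ip:
        # Clear and conspicuous notice delivered in-session, when the user
        # connects from an IP address or location they ordinarily use.
        return "in_session_online_notice"
    # Otherwise, fall back to another method already provided for by
    # California law (e.g., written notice).
    return "alternate_statutory_method"

print(choose_notification_channel(False, False))  # electronic_notice_with_reset_instructions
print(choose_notification_channel(True, True))    # in_session_online_notice
```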

Previously, breach notification in California was triggered only by the unauthorized acquisition of an individual’s first name or initial and last name in combination with one or more of the following data elements, when either the name or the data elements are unencrypted: social security number; driver’s license or state identification number; account, credit card or debit card number in combination with any required security or access codes; medical information; or health information. S.B. 46 not only expands the categories of information the disclosure of which may trigger the requirement for notification, it also—perhaps unintentionally—requires notification of unauthorized access to user credential information even if that information is encrypted. Thus, S.B. 46 significantly expands the circumstances in which notification may be required.

New Requirements for Disclosure of Tracking Practices

A.B. 370 amends the California Online Privacy Protection Act (CalOPPA) to require companies that collect personally identifiable information online to include information about how they respond to “do not track” signals, as well as other information about their collection and use of personally identifiable information. The newly required information includes:

  • How the company responds to “do not track” signals or other mechanisms that provide consumers the ability to exercise choice over the collection of personally identifiable information about their online activities over time and across third-party websites or online services, if the company collects such information; and
  • Whether third parties may collect personally identifiable information about a consumer’s online activities over time and across different websites when a consumer uses the company’s website.

These disclosures have to be included in a company’s privacy policy. In order to comply with the first requirement, companies may provide a clear and conspicuous hyperlink in their privacy policy to an online description of any program or protocol the company follows that offers the user that choice, including its effects.

It’s important to note that the application of CalOPPA is broad. It applies to any “operator of a commercial Web site or online service that collects personally identifiable information through the Internet about individual consumers residing in California who use or visit its commercial Web site or online service.” As it is difficult to do business online without attracting users in technologically sophisticated and demographically diverse California, these provisions will apply to most successful online businesses.

What to Do

In response to the passage of these new laws, companies should take the opportunity to examine their data privacy and security policies and practices to determine whether any updates are needed. Companies should review and, if necessary, revise their data security breach plans to account for the newly added triggering information as well as the new notification that may be used if that information is accessed. Companies who collect personally identifiable information online or through mobile applications should review their online tracking activities and their privacy policies to determine whether and what revisions are necessary. The California Attorney General interprets CalOPPA to apply to mobile applications that collect personally identifiable information, so companies that provide such mobile apps should remember to include those apps in their review and any update.
