Office for Civil Rights (OCR) to Begin Phase 2 of HIPAA Audit Program

McDermott Will & Emery

The U.S. Department of Health and Human Services’ Office for Civil Rights (OCR) will soon begin a second phase of audits (Phase 2 Audits) of compliance with the Health Insurance Portability and Accountability Act of 1996 (HIPAA) privacy, security and breach notification standards (HIPAA Standards), as required by the Health Information Technology for Economic and Clinical Health (HITECH) Act. Unlike the pilot audits conducted during 2011 and 2012 (Phase 1 Audits), which focused on covered entities, OCR will conduct Phase 2 Audits of both covered entities and business associates.  Rather than comprehensively reviewing all of the HIPAA Standards, the Phase 2 Audit program will focus on areas of greater risk to the security of protected health information (PHI) and on pervasive noncompliance identified in OCR’s Phase 1 Audit findings and observations.  The Phase 2 Audits are also intended to identify best practices and uncover risks and vulnerabilities that OCR has not identified through other enforcement activities.  OCR will use the Phase 2 Audit findings to identify technical assistance that it should develop for covered entities and business associates.  Where an audit reveals a serious compliance concern, OCR may initiate a compliance review of the audited organization that could lead to civil money penalties.

The following sections summarize OCR’s Phase 1 Audit findings, describe the Phase 2 Audit program and identify steps that covered entities and business associates should take to prepare for the Phase 2 Audits.

Phase 1 Audit Findings

OCR audited 115 covered entities under the Phase 1 Audit program, with the following aggregate results:

  • Only 11% of the audited covered entities had no findings or observations;
  • Despite representing just over half (53%) of the audited entities, health care providers were responsible for 65% of the total findings and observations;
  • The smallest covered entities were found to struggle with compliance under all three of the HIPAA Standards;
  • More than 60% of the findings or observations were Security Standard violations, and 58 of 59 audited health care provider covered entities had at least one Security Standard finding or observation, even though the Security Standards represented only 28% of the total audit items;
  • More than 39% of the findings and observations related to the Privacy Standards were attributed to a lack of awareness of the applicable Privacy Standard requirement; and
  • Only 10% of the findings and observations were attributable to a lack of compliance with the Breach Notification Standards.

The Phase 2 Audit Program

Selection of Phase 2 Audit Recipients

Unlike the Phase 1 Audit Program, which focused on covered entities, OCR will conduct Phase 2 Audits of both covered entities and business associates.  OCR has randomly selected a pool of 550–800 covered entities through the National Provider Identifier database and America’s Health Insurance Plans’ databases of health plans and health care clearinghouses.  OCR will issue a mandatory pre-audit screening survey to the pool of covered entities this summer.  The survey will address organization size measures, location, services and contact information.  Based on the responses, the agency will select approximately 350 covered entities, including 232 health care providers, 109 health plans and 9 health care clearinghouses, for Phase 2 Audits.  OCR intends to select a wide range of covered entities and will conduct the audits between October 2014 and June 2015.

OCR will notify and send data requests to the 350 selected covered entities this fall.  The data requests will ask the covered entities to identify and provide contact information for their business associates.  OCR will select the business associates that will participate in the Phase 2 Audits from this pool.

Audit Process

OCR will audit approximately 150 of the 350 selected covered entities and 50 of the selected business associates for compliance with the Security Standards, 100 covered entities for compliance with the Privacy Standards and 100 covered entities for compliance with the Breach Notification Standards.  OCR will initiate the Phase 2 Audits of covered entities by sending the data requests this fall and then initiate the Phase 2 Audits of business associates in 2015.

Covered entities and business associates will have two weeks to respond to OCR’s audit request.  The data requests will specify the content, file names and other documentation requirements, and the auditors may contact the covered entities and business associates for clarifications or additional documentation.  OCR will only consider current documentation that is submitted on time.  Failure to respond to a request could lead to a referral to the applicable OCR Regional Office for a compliance review.

Unlike the Phase 1 Audits, OCR will conduct the Phase 2 Audits as desk reviews using an updated audit protocol, rather than on-site at the audited organization.  OCR will make the Phase 2 Audit protocol available on its website so that entities may use it for internal compliance assessments.

The Phase 2 Audits will target the HIPAA Standards that were frequent sources of noncompliance in the Phase 1 Audits, including:  risk analysis and risk management; content and timeliness of breach notifications; notice of privacy practices; individual access; the Privacy Standards’ reasonable safeguards requirement; training on policies and procedures; device and media controls; and transmission security.  OCR also projects that Phase 2 Audits in 2016 will focus on the Security Standards’ encryption and decryption requirements, facility access control, breach reports and complaints, and other areas identified by earlier Phase 2 Audits.  Phase 2 Audits of business associates will focus on risk analysis and risk management and on breach reporting to covered entities.

OCR will present each audited organization with a draft audit report so that management may comment before the report is finalized.  OCR will then take management’s response into account and issue a final report.

What Should You Do to Prepare for the Phase 2 Audits?

Covered entities and business associates should take the following steps to ensure that they are prepared for a potential Phase 2 Audit:

  • Confirm that the organization has recently completed a comprehensive assessment of potential security risks and vulnerabilities to the organization (the Risk Assessment);
  • Confirm that all action items identified in the Risk Assessment have been completed or are on a reasonable timeline to completion;
  • Ensure that the organization has a complete inventory of business associates for purposes of the Phase 2 Audit data requests;
  • If the organization has not implemented any of the Security Standards’ addressable implementation specifications for any of its information systems, confirm that the organization has documented (i) why the addressable implementation specification was not reasonable and appropriate and (ii) all alternative security measures that were implemented;
  • Ensure that the organization has implemented a breach notification policy that accurately reflects the content and deadline requirements for breach notification under the Breach Notification Standards;
  • Health care provider and health plan covered entities should ensure that they have a compliant Notice of Privacy Practices and not only a website privacy notice;
  • Ensure that the organization has reasonable and appropriate safeguards in place for PHI that exists in any form, including paper and verbal PHI;
  • Confirm that workforce members have received training on the HIPAA Standards that are necessary or appropriate for a workforce member to perform his/her job duties;
  • Confirm that the organization maintains an inventory of information system assets, including mobile devices (even in a bring your own device environment);
  • Confirm that all systems and software that transmit electronic PHI employ encryption technology, or that the organization has documented a risk analysis supporting the decision not to employ encryption (see the illustrative sketch following this list);
  • Confirm that the organization has adopted a facility security plan for each physical location that stores or otherwise has access to PHI, in addition to a security policy that requires a physical security plan; and
  • Review the organization’s HIPAA security policies to identify any actions that have not been completed as required (e.g., physical security plans, disaster recovery plans, emergency access procedures).
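
As a rough illustration of the encryption item referenced above, the following Python sketch uses the open-source cryptography package’s Fernet recipe to encrypt a record containing PHI before transmission. It is a minimal sketch only – the key handling, record format and field names are hypothetical, and nothing here should be read as what the Security Standards require.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical key management: in practice the key would live in a
# hardware security module or managed key vault, never in source code.
key = Fernet.generate_key()
cipher = Fernet(key)

# Hypothetical PHI record; any real schema will differ.
record = b"patient=Jane Roe; mrn=000000; diagnosis=..."

token = cipher.encrypt(record)          # ciphertext safe to transmit
assert cipher.decrypt(token) == record  # round-trip sanity check

print(token[:16], b"...")
```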

Proposed Health Information Technology Strategy Aims to Promote Innovation

Sheppard Mullin

On April 7, 2014, the Food and Drug Administration (FDA), in consultation with the Office of the National Coordinator for Health Information Technology (ONC) and the Federal Communications Commission (FCC), released a draft report addressing a proposed strategy and recommendations on an “appropriate, risk-based regulatory framework pertaining to health information technology.”

This report, entitled “FDASIA Health IT Report: Proposed Strategy and Recommendations for a Risk-Based Framework,” was mandated by Section 618 of the Food and Drug Administration Safety and Innovation Act (FDASIA) and establishes a proposed blueprint for the regulation of health IT.  The FDA, ONC and FCC (the Agencies) noted that risk and controls on such risk should focus on health IT functionality, and proposed a flexible system for categorizing health IT and evaluating the risks and need for regulation for each category.

The Agencies set out four key priority areas: (1) promote the use of quality management principles, (2) identify, develop, and adopt standards and best practices, (3) leverage conformity assessment tools, and (4) create an environment of learning and continual improvement.

The Agencies are seeking public comment on the specific principles, standards, practices, and tools that would be appropriate as part of this regulatory framework.  In addition, the Agencies propose establishing a new Health IT Safety Center that would allow reporting of health IT-related safety events that could then be disseminated to the health IT community.

The Agencies also divided health IT into three broad functionality-based groups: (1) administrative, (2) health management, and (3) medical device. The Agencies noted that health IT with administrative functionality, such as admissions, billing and claims processing, scheduling, and population health management pose limited or no risk to the patient, and as a result no additional oversight is proposed.

Health IT with health management functionality, such as health information and data exchange, data capture and encounter documentation, provider order entry, clinical decision support, and medication management, would be subject to the regulatory framework proposed in the report.  In addition, the FDA stated that a product with health management functionality that meets the statutory definition of a medical device would not be subject to additional FDA oversight.

The report put a spotlight on clinical decision support (CDS), which provides health care providers and patients with knowledge and person-specific information, intelligently filtered or presented at appropriate times, to enhance health and health care.  The report concluded that, for the most part, CDS does not replace clinicians’ judgment, but rather assists clinicians in making timely, informed, higher-quality decisions.  These functionalities are categorized as health management IT, and the report concludes that most CDS falls into this category.

However, certain CDS software – those that are medical devices and present higher risks – warrant the FDA’s continued focus and oversight.  Medical device CDS includes computer aided detection/diagnostic software, remote display or notification of real-time alarms from bedside monitors, radiation treatment planning, robotic surgical planning and control, and electrocardiography analytical software.

The FDA intends to focus its oversight on health IT with medical device functionality, such as described above with respect to medical device CDS.  The Agencies believe that this type of functionality poses the greatest risk to patient safety, and therefore would be the subject of FDA oversight.  The report recommends that the FDA provide greater clarity related to medical device regulation involving health IT, including: (1) the distinction between wellness and disease-related claims, (2) medical device accessories, (3) medical device CDS software, (4) medical device software modules, and (5) mobile medical apps.

The comment period remains open through July 7, 2014, and the report’s recommendations may change based on the comments the Agencies receive. In the meantime, companies in the clinical software and mobile medical apps industry should follow the final guidance recently published by the FDA with respect to the regulation of their products.


The White House Big Data Report & Apple’s iOS 8: Shining the Light on an Alternative Approach to Privacy and Biomedical Research

Drinker Biddle & Reath LLP

Big data derives from “the growing technological ability to capture, aggregate, and process an ever-greater volume, velocity, and variety of data.”[i] Apple’s just-released iOS 8 software development kit (“iOS 8 SDK”) highlights this growth.[ii] The iOS 8 SDK touts over 4,000 application programming interfaces (APIs), including “greater extensibility” and “new frameworks.”[iii] For example, HomeKit and HealthKit, two of these new frameworks, serve as hubs for data generated by other applications and provide user interfaces to manage that data and related functionality.[iv] HealthKit’s APIs “provide the ability for health and fitness apps to communicate with each other … to provide a more comprehensive way to manage your health and fitness.”[v] HomeKit integrates home automation functions in a central location within the iOS device, allowing users to lock/unlock doors, turn cameras on/off, change or view thermostat settings, turn lights on/off, open garage doors and more – all from a single app.[vi] The iOS 8 SDK will inevitably lead to the development of countless apps and other technologies that “capture, aggregate, and process an ever-greater volume, velocity, and variety of data,” contributing immense volumes of data to the already-gargantuan big data ecosystem.

In the context of our health and wellbeing, big data – which includes, but is definitely not limited to, data generated by future iOS 8-related technologies – has boundless potential and can have a momentous impact on biomedical research, leading to new therapies and improved health outcomes. The big data reports recently issued by the White House and the President’s Council of Advisors on Science and Technology (“PCAST”) echo this fact. However, these reports also emphasize the challenges posed by applying the current approach to privacy to big data, including the focus on notice and consent.

After providing some background, this article examines the impact of big data on medical research. It then explores the privacy challenges posed by focusing on notice and consent with respect to big data. Finally, this article describes an alternative approach to privacy suggested by the big data reports and its application to biomedical research.

Background

On May 1, 2014, the White House released its report on big data, “Big Data: Seizing Opportunities, Preserving Values” (“WH Report”). The WH Report was supported by a separate effort and report produced by PCAST, “Big Data and Privacy: A Technological Perspective” (“PCAST Report”).[vii] The privacy implications of the reports for biomedical research – an area where big data can arguably have the greatest impact – are significant.

Notice and consent provide the foundation upon which privacy laws are built. Accordingly, it can be difficult to envision a situation where these conceptual underpinnings, while still important, begin to yield to a new approach. However, that is exactly what the reports suggest in the context of big data. As HealthKit and iOS 8 SDK demonstrate, we live in a world where health data is generated in numerous ways, both inside and outside of the traditional patient-doctor relationship. If given access to all this data, researchers can better analyze the effectiveness of existing therapies, develop new therapies faster, and more accurately predict and suggest measures to avoid the onset of disease, all leading to improved health outcomes. However, existing privacy laws often restrict researchers’ access to such data without first soliciting and obtaining proof of appropriate notice and consent.[viii] Focusing on individual notice and consent in some instances can be unnecessarily restrictive and can stall the discovery and development of new therapies. This is exacerbated by the fact that de-identification (or pseudonymization) – a process typically relied upon to alleviate some of these obstacles – is losing its effectiveness or would require stripping data of much meaningful value. Recognizing these flaws, the WH Report suggests a new approach where the focus is taken off of the collection of data and turned to the ways in which parties, including biomedical researchers, use data – an approach that allows researchers to maximize the possibilities of big data, while protecting individual privacy and ensuring that data is processed in a reasonable way.

The Benefits of Big Data to Biomedical Research

Before discussing why a new approach to privacy in the context of big data and biomedical research may be necessary, it is first important to understand the role of big data in research. As noted, the concept of big data encompasses “the growing technological ability to capture, aggregate, and process an ever-greater volume, velocity, and variety of data.”[ix] The word “growing” is essential here, as the sources of data contributing to the big data ecosystem are extensive and will continue to expand, especially as Internet-enabled devices such as those contemplated by HomeKit continue to develop.[x] These sources include not only the traditional doctor-patient relationship, but also consumer-generated and other non-traditional sources of health data such as those contemplated by HealthKit, including wearable technologies (e.g., Fitbit), patient-support sites (e.g., PatientsLikeMe.com), wellness programs, electronic/personal health records, etc. These sources expand even further when non-health data is combined with lifestyle and financial data.[xi]

The WH Report recognizes that these new abilities to collect and process information have the potential to bring about “unexpected … advancements in our quality of life.”[xii] The ability of researchers to analyze this vast amount of data can help “identify clinical treatments, prescription drugs, and public health interventions that may not appear to be effective in smaller samples, across broad populations, or using traditional research methods.”[xiii] In some instances, big data can in fact be the necessary component of a life-changing discovery.[xiv]

Further, the WH Report finds that big data holds the key to fully realizing the promise of predictive medicine, whereby doctors and researchers can fully analyze an individual’s health status and genetic information to better predict the onset of disease and/or how an individual might respond to specific therapies.[xv] These findings have the ability to affect not only particular patients but also family members and others with a similar genetic makeup.[xvi] It is worth noting that the WH Report highlights bio-banks and their role in “confronting important questions about personal privacy in the context of health research and treatment.”[xvii]

In summary, big data has a profound impact on biomedical research and, as a necessary result, on those that benefit from the fruits of researchers’ labor. The key to its realization is a privacy regime that can unlock for researchers vast amounts of different types of data obtained from diverse sources.

Problems With the Current Approach

Where the use of information is not directly regulated by the existing privacy framework, providing consumers with notice and choice regarding the processing of their personal information has become the de facto rule. Where the collection and use of information is specifically regulated (e.g., HIPAA, FCRA, etc.), notice and consent is required whenever information is used or shared in a way not permitted under the relevant statute. For example, under HIPAA, a doctor can disclose a patient’s personal health information for treatment purposes (permissible use) but would need to provide the patient with notice and obtain consent before disclosing the same information for marketing purposes (impermissible use). To avoid this obligation, entities seeking to share data in a way not described in the privacy notice and/or permitted under applicable law can de-identify the data, to purportedly make the data anonymous (for example, John Smith drives a white Honda and makes $55,000/year (identified) v. Person X drives a white Honda and makes $55,000/year (de-identified)).[xviii] Except under very limited circumstances (e.g., HIPAA limited data sets), the requirements regarding notice and consent apply equally to biomedical research as to more commercial uses.
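
To make the de-identification example above concrete, here is a minimal Python sketch of key-based pseudonymization in the sense of footnote [xviii]: direct identifiers are replaced with random pseudonyms, and the key map determines whether the data is merely pseudonymized (key retained) or effectively anonymized (key destroyed). The record fields mirror the article’s hypothetical and are not drawn from any real schema.

```python
import secrets

# Two hypothetical records mirroring the article's example: a direct
# identifier (name) plus attributes that keep analytic value.
records = [
    {"name": "John Smith", "car": "white Honda", "salary": 55000},
    {"name": "Jane Doe", "car": "blue Ford", "salary": 62000},
]

def pseudonymize(rows):
    """Swap direct identifiers for random pseudonyms, keeping a key map."""
    key_map = {}  # pseudonym -> original name; destroying it approximates anonymization
    out = []
    for row in rows:
        pseudonym = "Person-" + secrets.token_hex(4)
        key_map[pseudonym] = row["name"]
        out.append({"id": pseudonym, "car": row["car"], "salary": row["salary"]})
    return out, key_map

deidentified, key_map = pseudonymize(records)
print(deidentified[0])  # e.g. {'id': 'Person-1f2e3d4c', 'car': 'white Honda', 'salary': 55000}
```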

In the context of big data, the first problem with notice and consent is that it places an enormous burden on the individual to manage all of the relevant privacy notices applicable to the processing of that individual’s data. In other words, it requires individuals to analyze each and every privacy notice applicable to them (which could be hundreds, if not more), determine whether those data collectors share information and with whom, and then attempt to track that information down as necessary. As the PCAST Report not-so-delicately states, “[i]n some fantasy world, users actually read these notices, understand their legal implications (consulting their attorneys if necessary), negotiate with other providers of similar services to get better privacy treatment, and only then click to indicate their consent. Reality is different.”[xix] This is aggravated by the fact that relevant privacy terms are often buried in privacy notices using legalese and provided on a take-it-or-leave-it basis.[xx] Although notice and consent may still play an important role where there is a direct connection between data collectors and individuals, it is evident why such a model loses its meaning when information is collected from a number of varied sources and those analyzing the data have no direct relationship with individuals.

Second, even where specific privacy regulations apply to the collection and use of personal information, such rules rarely consider or routinely allow for the disclosure of that information to researchers for biomedical research purposes, thus requiring researchers to independently provide notice and obtain consent. As the WH Report points out, “[t]he privacy frameworks that currently cover information now used in health may not be well suited to … facilitate the research that drives them.”[xxi] And as previously noted, often times biomedical researchers require non-health information, including lifestyle and financial data, if they want to maximize the benefits of big data. “These types of data are subjected to different and sometimes conflicting federal and state regulation,” if any regulation at all.[xxii]

Lastly, the ability to overcome de-identification is becoming easier due to “effective techniques … to pull the pieces back together through ‘re-identification’.”[xxiii] In fact, the very techniques used to analyze big data for legitimate purposes are the same advanced algorithms and technologies that allow re-identification of otherwise anonymous data.[xxiv] Moreover, “meaningful de-identification may strip the data of both its usefulness and the ability to ensure its provenance and accountability.”[xxv] In other words, de-identification is not as useful as it once was and further stripping data in an effort to overcome this fact could well extinguish any value the data may have (using the example above, car type and salary may still provide marketers with meaningful information (e.g., individuals with a similar salary may be interested in that car type), but the information “white Honda” alone is worthless). [xxvi]
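
The re-identification risk the reports describe can be shown with an equally small sketch: joining a de-identified record like the one above against a hypothetical auxiliary dataset on the remaining quasi-identifiers (car type and salary) recovers the identity whenever the combination is unique. This is an illustrative toy, not any specific technique cited in the reports.

```python
# De-identified row as produced above (pseudonym only, no name).
deidentified_row = {"id": "Person-1f2e3d4c", "car": "white Honda", "salary": 55000}

# Hypothetical auxiliary dataset that does carry names.
auxiliary = [
    {"name": "John Smith", "car": "white Honda", "salary": 55000},
    {"name": "Jane Doe", "car": "blue Ford", "salary": 62000},
]

matches = [p["name"] for p in auxiliary
           if (p["car"], p["salary"]) == (deidentified_row["car"],
                                          deidentified_row["salary"])]
if len(matches) == 1:
    # A unique combination of quasi-identifiers re-identifies the record.
    print(deidentified_row["id"], "is probably", matches[0])
```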

The consequences of all this are that either (1) biomedical researchers are deprived of valuable data or provided meaningless de-identified data, or (2) individuals have no idea that their information is being processed for research purposes. Both the benefits and obstacles relating to big data and biomedical research led to the WH Report’s recognition that we may need “to look closely at the notice and consent framework” because “focusing on controlling the collection and retention of personal data, while important, may no longer be sufficient to protect personal privacy.”[xxvii] Further, as the PCAST Report points out, and as reflected in the WH Report, “notice and consent is defeated by exactly the positive benefits that big data enables: new, non-obvious, unexpectedly powerful uses of data.”[xxviii] So what does this new approach look like?

Alternative Approach to Big Data: Focus on Use, Not Collection[xxix]

The WH Report does not provide specific proposals. Rather, it suggests a framework for a new approach to big data that focuses on the type of use of such data and associated security controls, as opposed to whether notice was provided and consent obtained at the point of its collection. Re-focusing attention to the context and ways big data is used (including the ways in which results generated from big data analysis are used) could have many advantages for individuals and biomedical researchers. For example, as noted above, the notice and consent model places the burden on the individual to manage all of the relevant privacy notices applicable to the processing of that individual’s data and provides no backstop when those efforts fail or no attempt to manage notice provisions is made. Where the attention focuses on the context and uses of data, it shifts the burden of managing privacy expectations to the data collector and it holds entities that utilize big data (e.g., researchers) accountable for how data is used and any negative consequences it yields.[xxx]

The following are some specific considerations drawn from the reports regarding how a potential use framework might work:

  • Provide that all information used by researchers, regardless of the source, is subject to reasonable privacy protections similar to those prescribed under HIPAA.[xxxi] For example, any data relied upon by researchers can only be used and shared for biomedical research purposes.
  • Create special authorities or bodies to determine reasonable uses for big data utilized by researchers so as to realize the potential of big data while preserving individual privacy expectations.[xxxii] This would include recognizing and controlling harmful uses of data, including any actions that would lead to an adverse consequence to an individual.[xxxiii]
  • Develop a central research database for big data accessible to all biomedical researchers, with universal standards and architecture to facilitate controlled access to the data contained therein.[xxxiv]
  • Provide individuals with notice and choice whenever big data is used to make a decision regarding a particular individual.[xxxv]
  • Where individuals may not want certain data to enter the big data ecosystem, allow them to create standardized data use profiles that must be honored by data collectors. Such profiles could prohibit the data collector from sharing any information associated with such individuals or their devices.
  • Require reasonable security measures to protect data and any findings derived from big data, including encryption requirements.[xxxvi] 
  • Regulate inappropriate uses or disclosures of research information, and make parties liable for any adverse consequences of privacy violations.[xxxvii]

By offering these suggestions for public debate, the WH and PCAST reports have only initiated the discussion of a new approach to privacy, big data and biomedical research. Plainly, these proposals bring with them numerous questions and issues that must be answered and resolved before any transition can be contemplated (notably, what are appropriate uses and who determines this?).

Conclusion

Technologies utilizing the iOS 8 SDK, including HealthKit and HomeKit, illustrate the technological growth contributing to the big data environment. The WH and PCAST reports exemplify the endless possibilities that can be derived from this environment, as well as some of the important privacy issues affecting our ability to harness these possibilities. The reports constitute their authors’ consensus view that the existing approach to big data and biomedical research restricts the true potential big data can have on research, while providing individuals with little-to-no meaningful privacy protections. Whether the suggestions contained in the WH and PCAST reports will be – or should be – further developed is an open question that will undoubtedly lead to a healthy debate. Yet, in the case of the PCAST Report, the sheer diversity of players recognizing big data’s potential and associated privacy implications – including, but not limited to, leading representatives and academics from the Broad Institute of Harvard and MIT, UC-Berkeley, Microsoft, Google, National Academy of Engineering, University of Texas at Austin, University of Michigan, Princeton University, Zetta Venture Partners, National Quality Forum and others – provides hope that this potential will one day be realized – in a way that appropriately protects our privacy.[xxxviii]

[i] WH Report, p. 2.

[ii] See Apple’s June 2, 2014, press release, Apple Releases iOS 8 SDK With Over 4,000 New APIs, last found at http://www.apple.com/pr/library/2014/06/02Apple-Releases-iOS-8-SDK-With-Over-4-000-New-APIs.html.

[iii] Id.

[iv] Id.

[v] Id.

[vi] Id.

[vii] The White House and PCAST issued summaries of their respective reports, including their policy recommendations.

[viii] WH Report, p. 7.

[ix] WH Report, p. 2.

[x] WH Report, p. 5.

[xi] WH Report, p. 23.

[xii] WH Report, p. 3.

[xiii] WH Report, p. 23.

[xiv] WH Report, p. 6 (the WH Report includes two research-related examples of the impact of big data on research, including a study whereby the large number of data sets made “the critical difference in identifying the meaningful genetic variant for a disease.”).

[xv] WH Report, p. 23.

[xvi] WH Report, p. 23.

[xvii] WH Report, p. 23.

[xviii] In privacy law, “anonymous” data is often considered a subset of “de-identified” data. “Anonymized” data means the data has been de-identified and is incapable of being re-identified by anyone. “Pseudonymized” data, the other primary subset of “de-identified” data, replaces identifying data elements with a pseudonym (e.g., random id number), but can be re-identified by anyone holding the key. If the key was destroyed, “pseudonymized” data would become “anonymized” data.

[xix] PCAST Report, p. 38.

[xx] PCAST Report, p. 38.

[xxi] WH Report, p. 23.

[xxii] WH Report, p. 23.

[xxiii] WH Report, p. 8.

[xxiv] WH Report, p. 54; PCAST Report, pp. 38-39.

[xxv] WH Report, p. 8.

[xxvi] The PCAST Report does recognize that de-identification can be “useful as an added safeguard.” See PCAST Report, p. 39. Further, other leading regulators and academics consider de-identification a key part of protecting privacy, as it “drastically reduces the risk that personal information will be used or disclosed for unauthorized or malicious purposes.” Dispelling the Myths Surrounding De-identification: Anonymization Remains a Strong Tool for Protecting Privacy, Ann Cavoukian, Ph.D. and Khaled El Emam, Ph.D. (2011), last found at http://www.ipc.on.ca/images/Resources/anonymization.pdf. Drs. Cavoukian and El Emam argue that “[w]hile it is clearly not foolproof, it remains a valuable and important mechanism in protecting personal data, and must not be abandoned.” Id.

[xxvii] WH Report, p. 54.

[xxviii] PCAST Report, p. 38; WH Report, p. 54.

[xxix] This approach is not one of the official policy recommendations contained in the WH Report. However, as discussed above, the WH Report discusses the impact of big data on biomedical research, as well as this new approach, extensively. Further, to the extent order has any meaning, the first recommendation made in the PCAST Report is that “[p]olicy attention should focus more on the actual uses of big data and less on its collection and analysis.” PCAST Report, pp. 49-50.

[xxx] WH Report, p. 56.

[xxxi] WH Report, p. 24.

[xxxii] WH Report, p. 23.

[xxxiii] PCAST Report, p. 44.

[xxxiv] WH Report, p. 24.

[xxxv] PCAST Report, pp. 48-49.

[xxxvi] PCAST Report, p. 49.

[xxxvii] PCAST Report, pp. 49-50.

[xxxviii] It must be noted that many leading regulators and academics have a different view on the importance and role of notice and consent, and argue that these principles in fact deserve more focus. See, e.g., The Unintended Consequences of Privacy Paternalism, Ann Cavoukian, Ph.D., Dr. Alexander Dix, LLM, and Khaled El Emam, Ph.D. (2014), last found at http://www.privacybydesign.ca/content/uploads/2014/03/pbd-privacy_paternalism.pdf.

Risky Business: Target Discloses Data Breach and New Risk Factors in 8-K Filing… Kind Of

Mintz Levin

After Target Corporation’s (NYSE: TGT) net earnings dropped 46% in its fourth quarter compared to the same period last year, Target finally answered the $441 million question – to 8-K, or not to 8-K?  Target filed its much-anticipated Current Report on Form 8-K on February 26, just over two months after it discovered its massive data breach.

In its 9-page filing, Target included two introductory sentences relating to disclosure of the breach under Item 8.01 – Other Events:

During the fourth quarter of 2013, we experienced a data breach in which certain payment card and other guest information was stolen through unauthorized access to our network. Throughout the Risk Factors in this report, this incident is referred to as the ‘2013 data breach’.

Target then buried the three new risk factors that directly discuss the breach, seemingly at random, among a total of 18 new risk factors covering topics ranging from natural disasters to income taxes.  The breach-related risk factors were the following:

  • The data breach we experienced in 2013 has resulted in government inquiries and private litigation, and if our efforts to protect the security of personal information about our guests and team members are unsuccessful, future issues may result in additional costly government enforcement actions and private litigation and our sales and reputation could suffer.
  • A significant disruption in our computer systems and our inability to adequately maintain and update those systems could adversely affect our operations and our ability to maintain guest confidence.
  • We experienced a significant data security breach in the fourth quarter of fiscal 2013 and are not yet able to determine the full extent of its impact and the impact of government investigations and private litigation on our results of operations, which could be material.

An interesting and atypically relevant part of Target’s 8-K is the “Date of earliest event reported” on its cover page.  Although Target disclosed its fourth quarter 2013 breach under Item 8.01, Target listed February 26, 2014 as the date of the earliest event reported, which is the date of the 8-K filing and corresponding press release disclosing Target’s financial results.  One can only imagine that this usually benign date was deliberated over for hours by expensive securities lawyers, and that using the February earnings release date instead of the December breach date was nothing short of deliberate – likely one more subtle way to shift the market’s focus away from the two-month-old data breach and instead bury the disclosure within a standard results-of-operations 8-K filing and 15 non-breach-related risk factors.

To Target’s credit, its fourth quarter and fiscal year ended on February 1, 2014, and Target’s fourth quarter included the entirety of the period during and after the breach through February 1.  Keeping that in mind, Target may not have had a full picture of how the breach affected its earnings in the fourth quarter until it prepared its fourth quarter and year-end financial statements this month.  Maybe the relevant “Date of earliest event” was the date on which Target was able to fully appreciate the effects of the breach, which occurred on the day that it finalized and released its earnings on February 26.  But maybe not.

Whatever the case may be, Target’s long awaited 8-K filing is likely only a short teaser of the disclosure that should be included in Target’s upcoming Form 10-K filing.

Article by: Adam M. Veness of Mintz, Levin, Cohn, Ferris, Glovsky and Popeo, P.C.

California Continues to Shape Privacy Standards: Song-Beverly Act Extended to Email Addresses

Womble Carlyle

Executive Summary: California retailers are restricted from requiring a customer’s email address as part of a credit card transaction. We knew that asking for zip codes was intrusive personal questioning, and now asking for an email address has been added to the list.

California’s Song-Beverly Credit Card Act (Cal. Civ. Code § 1747 et seq.) (“Song-Beverly Act” or “Act”) restricts businesses from requesting, or requiring, as a condition of accepting credit card payments, that the cardholder provide “personal identification information” that is written or recorded on the credit card transaction form or otherwise. “Personal identification information” means “information concerning the cardholder, other than information set forth on the credit card, and including, but not limited to, the cardholder’s address and telephone number.” The California Supreme Court has previously ruled that zip codes are also “personal identification information” under the Song-Beverly Act. See Pineda v. Williams-Sonoma Stores, Inc., 2011 Cal. LEXIS 1502 (Cal. Feb. 10, 2011).

Recently, a United States federal district court in California expanded “personal identification information” to include email addresses in a decision denying retailer Nordstrom’s motion to dismiss claims it violated the Song-Beverly Act. The plaintiff sued Nordstrom for collecting his email address as part of a credit card transaction at one of its California stores in order to email him a receipt, then subsequently using his email address to send him frequent, unsolicited marketing emails. See Capp v. Nordstrom, Inc., 2013 U.S. Dist. LEXIS 151867, 2013 WL 5739102 (E.D. Cal. Oct. 21, 2013).

Raising a case of first impression under California law, Nordstrom claimed that email addresses are not “personal identification information” under the Song-Beverly Act, so the Act did not apply. The court disagreed, finding the opposite based on the California Supreme Court’s earlier ruling in Pineda. Nordstrom’s argument that email addresses, unlike zip codes, can readily be changed, and that consumers can have multiple email addresses, was not persuasive. The court held that an email address pertains to a cardholder in a more personal and specific way than a zip code does. Unlike a zip code, which refers to the general area where a cardholder works or lives, an email address permits direct contact with the consumer and implicates the consumer’s privacy interests. The court concluded that the collection of email addresses is contrary to the Song-Beverly Act’s purpose of guarding against the misuse of personal information for marketing purposes. In particular, the plaintiff’s allegation that his email address was collected to send him a receipt and then used to send him promotional emails directly implicates the protective purposes of the Act as interpreted in Pineda.

Pineda held that zip codes are personal information for purposes of the Song-Beverly Act, and therefore a brick and mortar retailer violated the Act when it requested and recorded such data. In the Pineda decision, the California Supreme Court found that zip codes, like the card holder’s address expressly called out as “personal identification information” under the Act, were unnecessary to completing the credit card transaction and inconsistent with the protective purpose of the Act. This is especially true when a zip code is collected to be used with the card holder’s name in order to locate the card holder’s address, permitting a retailer to locate indirectly what it is prohibited from obtaining directly under the Act.

Nordstrom also argued that the plaintiff’s claims under the Song-Beverly Act were preempted by the federal “Controlling the Assault of Non-Solicited Pornography and Marketing Act” (better known as the CAN-SPAM Act), but the court disagreed. While the CAN-SPAM Act contains a preemption provision, it only preempts state laws that regulate the manner in which email messages are sent and their content, both of which are not regulated under the Song-Beverly Act.

Retailer tip: The federal court issuing this most recent decision recommends waiting to request an email address (or a zip code) until after the consumer has the receipt from their credit card transaction in hand, and then sending the consumer emails only in conformance with the CAN-SPAM Act.

In the wake of Pineda, retailers faced class action lawsuits for requesting consumer zip codes at check out. This new decision could have a similar effect.

Article by: Womble Carlyle Sandridge & Rice, PLLC

To 8-K, or not to 8-K? For Target, that is indeed the question.

Mintz Levin

As anyone with a pulse and a computer, television or carrier pigeon knows, Target Corporation (NYSE: TGT) suffered a major data breach in December – the extent of which is still being uncovered – with the latest estimates pegging the number of customers who have had their personal information stolen at anywhere from 70 to 110 million.  As a public company, a breach of this magnitude should be material enough to warrant a Form 8-K filing, right?  As of this post, Target doesn’t seem to think so.

Form 8-K contains mandatory disclosure requirements when certain enumerated events occur, such as the entry into a material definitive agreement (Item 1.01) or the resignation of a director (Item 5.02).  Reporting an event such as the Target data breach would likely fall under Item 8.01 of Form 8-K, which is used to report “Other Events.”  Item 8.01 permits the registrant, at its option, to disclose any events not otherwise called for by another Form 8-K item that the registrant “deems of importance to security holders,” and is an entirely voluntary filing.

Although filing under Item 8.01 of Form 8-K is voluntary, other companies that have suffered smaller data breaches have opted to file an 8-K to disclose them, including The TJX Companies, Inc. (NYSE: TJX), which disclosed its breach in an 8-K in January 2007, and Morningstar, Inc. (NASDAQ: MORN), which disclosed its more recent breach in an 8-K in July 2013.  Target’s securities lawyers may believe that the breach is not “important to security holders,” or is not sufficiently material to the roughly $38 billion company to warrant the filing of an 8-K, but 70 to 110 million affected customers is hardly immaterial, even for Target.  In a statement released January 10, Target warned that the costs related to the breach “may have a material adverse effect on Target’s results of operations in fourth quarter 2013 and/or future periods.”

Indeed, Target evidently determined when filing its Form 10-K for 2012 that the risk of a data security breach was material enough to warrant disclosure in its risk factors:

“If our efforts to protect the security of personal information about our guests and team members are unsuccessful, we could be subject to costly government enforcement actions and private litigation and our reputation could suffer.

“The nature of our business involves the receipt and storage of personal information about our guests and team members. We have a program in place to detect and respond to data security incidents. To date, all incidents we have experienced have been insignificant.  If we experience a significant data security breach or fail to detect and appropriately respond to a significant data security breach, we could be exposed to government enforcement actions and private litigation. In addition, our guests could lose confidence in our ability to protect their personal information, which could cause them to discontinue usage of REDcards, decline to use our pharmacy services, or stop shopping with us altogether. The loss of confidence from a significant data security breach involving team members could hurt our reputation, cause team member recruiting and retention challenges, increase our labor costs and affect how we operate our business.” (emphasis added)

Of course, there is no time limit for filing under Item 8.01 of Form 8-K because it is a voluntary filing, so a filing may still be forthcoming from Target.  In any event, one can only imagine that the risk factor language above will look very different in Target’s next Form 10-K filing in two months.

Article by: Mintz, Levin, Cohn, Ferris, Glovsky and Popeo, P.C.

New Online Privacy Policy Requirements Take Effect January 1, 2014

Vedder Price

California Online Privacy Protection Act (CalOPPA)

Owners of websites, online services or mobile applications (apps) that can be accessed or used by California residents should ensure their compliance with the new amendments to the California Online Privacy Protection Act of 2003 (CalOPPA) by the law’s January 1, 2014 effective date.  The borderless nature of the Internet makes this law applicable to almost every website or online service and mobile application.  Accordingly, companies should review and revise their online privacy policies to ensure compliance with the new law and avoid potentially significant penalties.

Previously, CalOPPA required the owner of any website or online service operated for commercial purposes (an “operator”) that collects California residents’ personally identifiable information (PII) to conspicuously post a privacy policy that met certain content requirements, including identifying the types of PII collected and the categories of third parties with whom that information is shared. The new law requires that companies subject to CalOPPA provide the following additional disclosures in their privacy policies.

  • How an operator responds to “do not track” signals from Internet browsers and any other mechanism that provides consumers a choice regarding the collection of PII about an individual consumer’s online activities over time and across third-party websites and online services.  A company may satisfy this requirement by revising its privacy policy to include the new disclosures or by providing a clear and conspicuous hyperlink to a webpage that contains a description of any program or protocol the company follows to provide consumers a choice about tracking, including the effects of the consumer’s choice.
  • Whether third parties may collect PII about a user’s online activities over time and across different websites when a consumer uses the operator’s website or online service.  An operator is not, however, required to disclose the identities of those third parties.

The California law does not require that operators honor a user’s “do not track” signals. Instead, operators must only provide users with a disclosure about how the website or mobile app will respond to such mechanisms. “Do not track” mechanisms are typically browser or device settings that send a signal – such as the DNT HTTP header – telling websites or mobile apps that the user does not want his or her website or app activities tracked by the operator, including through analytics tools, advertising networks, and other types of data collection and tracking practices.  Further, the Privacy Enforcement and Protection Unit of the California Office of the Attorney General recently stated that the required disclosures should not be limited to tracking for online behavioral advertising purposes, but must extend to any other purpose for which online behavioral data is collected by a business’s website (e.g., market research, website analytics, website operations, fraud detection and prevention, or security).
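
For illustration, the sketch below shows how a web application might detect the most common such signal, the DNT request header, in a minimal Python WSGI application. How a site responds to the header is up to the operator; under amended CalOPPA the obligation is to disclose that response. The handler behavior shown here is hypothetical.

```python
from wsgiref.simple_server import make_server

def app(environ, start_response):
    # Browsers with "do not track" enabled send the request header "DNT: 1",
    # which WSGI exposes as HTTP_DNT. CalOPPA requires describing how the
    # site responds to the signal, not necessarily honoring it.
    honoring_dnt = environ.get("HTTP_DNT") == "1"
    body = (b"DNT signal received; analytics tags suppressed."
            if honoring_dnt
            else b"No DNT signal; standard analytics are active.")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]

if __name__ == "__main__":
    make_server("localhost", 8000, app).serve_forever()
```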

A violation of the law can result in a civil fine of up to $2,500 per incident. The California Attorney General maintains that each noncompliant mobile app download constitutes a single violation and that each download may trigger a fine; on that theory, an app downloaded 100,000 times could in principle expose its operator to $250 million in fines.

Given that most company websites will have California visitors, companies should consider taking the following steps to ensure compliance with the CalOPPA amendments by January 1, 2014:

  • Identify the tracking mechanisms in place on your company’s websites and online services, including (a) the specific types of PII collected by each tracking mechanism, (b) whether users have the option to control whether and how the mechanisms are used and (c) how the website responds to “do not track” signals. Seek input from those familiar with your website, including (i) technicians and developers who understand the mechanics of how the website operates, including how it responds to “do not track” signals, (ii) financial and marketing personnel who understand how user PII is monetized, and (iii) any other stakeholders who access or handle user PII.
  • Review the practices of any third parties that have the ability to track users on your website. To draft the new disclosures, you will need to understand how those third parties track your users and whether they are capable of doing so before or after the users leave your service.
  • Incorporate the information identified above to modify your online privacy policy to include the required behavioral tracking disclosures.
  • Retain the prior version of the policy in your records, including the date on which each version was posted to the site. The new version should have an updated effective date to distinguish it from the previous version.

Expansion of California’s Data Breach Notification Requirements

Under another new law taking effect on January 1, 2014, California will expand its data breach notification requirements by adding new types of information to the definition of “personal information” under California Civil Code §§ 1798.29 and 1798.82. The new law requires notification if a California resident’s personal information is compromised, and, as with CalOPPA, the breach notification requirements apply regardless of the location of the organization that sustains the breach.  Therefore, if your business collects and retains California residents’ PII, the amended breach notification law applies to it.

Previously, the California law required notification of a data breach in the event of the unauthorized access to or disclosure of an individual’s name, in combination with that individual’s (i) Social Security number, (ii) driver’s license or California ID number, (iii) account, credit or debit card number, together with a security or access code, (iv) medical information, or (v) health information, where either the name or the other piece of information was not encrypted. Under the new definition, “personal information” will also include “[a] user name or email address, in combination with a password or security question and answer that would permit access to an online account.”
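
As a rough way to see how the amended definition changes breach triage, the following Python sketch encodes the two triggers described above as a simple check. The field labels are hypothetical shorthand for the statutory categories, and the function is an illustration, not legal advice.

```python
# Hypothetical shorthand labels for the statute's enumerated data elements.
ENUMERATED_IDS = {"ssn", "drivers_license_or_ca_id",
                  "financial_account_with_access_code",
                  "medical_information", "health_information"}

def notification_may_be_required(compromised):
    """Rough triage sketch of the amended definition; not legal advice."""
    compromised = set(compromised)
    # Pre-2014 trigger: name plus an enumerated data element (unencrypted).
    if "name" in compromised and compromised & ENUMERATED_IDS:
        return True
    # New 2014 trigger: online credentials, standing alone.
    has_account_id = bool({"username", "email"} & compromised)
    has_secret = bool({"password", "security_question_and_answer"} & compromised)
    return has_account_id and has_secret

print(notification_may_be_required({"email", "password"}))       # True
print(notification_may_be_required({"name", "favorite_color"}))  # False
```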

Accordingly, if your business or organization collects this type of information, then it should consider undertaking the following proactive measures to reduce the risk and magnitude of a potential data breach:

  • Periodically and systematically delete nonessential personal information. By deleting obsolete PII and other sensitive information, businesses can significantly reduce the risk of a breach.  Retaining such obsolete legacy PII serves no business purpose, but only adds unnecessary exposure and potential liability.
  • Conduct a PII inventory and perform a risk assessment of your security measures.  Identify what PII is being collected by your organization, where it is retained, who has access to it and what security measures protect it.  Ensuring that sufficient protections are in place may not prevent every incident, but it can reduce the possibility of an incident occurring in the first place and limit the disruption to your business if there is a breach.
  • Limit the disclosure of PII to third parties only when necessary to provide services or products. You can be equally responsible for a data breach notification if the person or entity who experiences the data breach was a third party who received PII from you. Any vendor or third party with whom you share PII should contractually represent and warrant that they have in place certain standards for protecting that information and agree to indemnify your company for any loss that results from a breach.

 

Article by: Vedder Price

To Track or Not to Track Re: Digital Advertising

McDermott Will & Emery

Digital advertising based on tracking users’ interests and related privacy concerns have been the subject of many recent news articles.  What does this mean for businesses?  Evolving industry practices and new legislation relating to online privacy and user tracking likely require changes to online privacy practices and policies.

Online privacy and user tracking are in the news almost daily.  Consider these highlights from the past few weeks about online tracking of California minors, big data brokers, California legislation addressing “do not track,” new mobile and online interest-based advertising technology, and a warning to all website operators from the Better Business Bureau:

New Privacy Rights for California Minors

On September 23, 2013, Governor Brown signed into law new Sections 22580 through 22582 of the California Business and Professions Code titled “Privacy Rights for California Minors in the Digital World.”  The new law, which goes into effect January 1, 2015, requires an operator of a website (including online services and applications, such as a social media site) or mobile application that is “directed to minors” to allow minors (defined as anyone younger than 18 years old residing in California) who are registered users the opportunity to un-post or remove (or request removal of) their posted online content.  The operator also must provide minors with notice and “clear instructions” about how to remove their posted content.  The operator is not, however, required to remove posted content in certain specific circumstances, such as when the content was posted by a third party.

This new law also prohibits website and mobile app operators from advertising to California minors certain products and services that minors cannot legally purchase, such as alcoholic beverages, firearms, ammunition, spray paint, tobacco products, fireworks, tanning services, lottery tickets, tattoos, drug paraphernalia, electronic cigarettes, “obscene matter” and lethal weapons.  Operators also are prohibited from using, disclosing or compiling certain personal information about the minor for the purpose of marketing these products or services.

Senator Rockefeller Expands Investigation of Data Brokers

On September 25, 2013, Senator Rockefeller (D-WV) announced that he had sent letters to 12 operators of popular family-, health- and personal-finance-related consumer websites requesting details about whether, and what, information collected from consumers is shared with data brokers.  In his letter to the operator of self.com, for example, Senator Rockefeller noted that “[w]hile some consumers may not object to having their information categorized and used for marketing purposes, before they share personal information it is important that they know it may be used for purposes beyond those for which they originally provided it.”

California Adds Do-Not-Track Disclosure Requirements Effective January 1, 2014

On September 27, 2013, California Governor Brown signed into law amendments to the California Online Privacy Protection Act (CalOPPA), a 2004 law requiring all commercial websites and online service providers collecting personally identifiable information about California residents to “conspicuously” post a “privacy policy.”  The amendments to CalOPPA, which take effect on January 1, 2014, add two new disclosure requirements for privacy policies required by CalOPPA:

  • The privacy policy must explain how the website “responds to ‘Do Not Track’ signals from web browsers or other mechanisms that provide California residents the ability to exercise choice” about collection of their personally identifiable information (Cal Bus and Prof Code §22575(b)(5)).
  • The privacy policy must disclose whether third parties use or may use the website to track (i.e., collect personally identifiable information about) individual California residents “over time and across third-party websites” (Cal Bus and Prof Code §22575(b)(6)).

The “Bill Analysis” history indicates that CalOPPA amendments are not intended to “prohibit third-party or any other form of online tracking” but rather to “implement a uniform protocol for informing Internet users about tracking . . . and any options they may have to exercise choice . . .” (6/17/13 – Senate Judiciary).

A website operator may meet the “do not track” disclosure requirement by including a link in the privacy policy to “an online location containing a description, including the effects, of any program or protocol the operator follows that offers the consumer that choice” (Cal Bus and Prof Code §22575(b)(7)).
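
Mechanically, a browser with “Do Not Track” enabled sends the HTTP header DNT: 1 with each request; amended CalOPPA requires operators to disclose how they respond to that signal, not to honor it. As a minimal detection sketch, assuming a Python/Flask web application (the route and responses are placeholders, not a compliance mechanism):

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def index():
    # Browsers with "Do Not Track" enabled send the header "DNT: 1".
    # Amended CalOPPA requires disclosing how the site responds to this
    # signal; whether to honor it remains the operator's choice.
    if request.headers.get("DNT") == "1":
        # Hypothetical choice: skip third-party advertising tags.
        return "Page served without interest-based advertising tags."
    return "Page served with interest-based advertising tags."

if __name__ == "__main__":
    app.run()
```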

The reference in §22575(b)(7) to “an online location” suggests that businesses already complying with the “enhanced notice link” requirements of the Self-Regulatory Program for Online Behavioral Advertising of the Digital Advertising Alliance (DAA) will comply with amended CalOPPA.  Among other requirements, the DAA’s self-regulatory program requires website owners/operators (called “First Parties”) to provide “clear, meaningful and prominent” disclosure about data collection and use for advertising purposes, and to offer consumers a way to opt out of tracking, such as through the DAA’s consumer choice page.  As noted in the Bill Analyses, while the DAA’s consumer choice mechanism enables consumers to opt out of receiving advertising based on online tracking data, it only works for companies that participate in the DAA’s program and “does not allow consumers not to be tracked.”

User Credentials Subject to California Breach Laws Effective January 1, 2014

Governor Brown also signed into law amendments to California’s breach notification laws on September 27, 2013.  As amended, the definition of “personal information” that triggers breach notification requirements includes consumers’ online credentials: “user name or email address, in combination with a password or security question and answer that would permit access to an online account.”

Mobile Advertising: Mobile Telephone as Tracking Device

In the October 6, 2013, edition of the New York Times, an article titled “Selling Secrets of Phone Users to Advertisers” describes sophisticated profiling techniques for mobile phone users that draw on data collected through partnerships with various other online service providers.  These companies are developing alternatives to cookies, which do not work on mobile devices and, as the new California law illustrates, are becoming less effective as an online tracking technique because users can block or delete them.

New Tracking Technology from Microsoft and Google

On October 9, 2013, AdAge reported that Microsoft is developing a new kind of tracking technology to replace cookies.  The new technology would function as a “device identifier,” allowing user tracking across devices that use Microsoft Windows, Xbox, Internet Explorer, Bing and other Microsoft services.  Similarly, USA Today reported that Google is developing its own digital tracking mechanism known as “AdID.”  While both of these new trackers would be used to collect and aggregate data for advertising and marketing purposes, they purportedly will offer users more control over how and what online activity is tracked and who has access to their personal data.

Better Business Bureau Issues Compliance Warning to Website Operators

On October 14, 2013, the Better Business Bureau issued a Compliance Warning noting that a “significant minority of website operators” are omitting the “enhanced notice link” (as required by the DAA’s Self-Regulatory Program for Online Behavioral Advertising) when ad networks and other third parties collect data for interest-based advertising purposes but cannot provide their own notice on the website on which the data collection occurs.  The Better Business Bureau operates the Online Interest-Based Advertising Accountability Program, through which it monitors businesses’ advertising practices and enforces the DAA’s self-regulatory program, even for companies that are not participating in it.

All of this news has created consumer confusion.  While consumers are increasingly aware of being tracked, they don’t know exactly what tracking entails or which websites are doing it—and they are not happy about it.  A study by data privacy company TRUSTe found that 80 percent of consumers are aware of being tracked and 52 percent don’t like it.

What to Do?

A check-up for the privacy policy (or “privacy statement,” which is the increasingly popular industry term) posted on your company’s website is a good way to start evaluating your company’s digital advertising and privacy practices.  The online privacy statement is the primary means by which website operators (also known as “publishers”) communicate their privacy practices to users.

These four steps can help you evaluate your company’s privacy statement:

First, find out if your company’s marketing strategy includes advertising based on consumer information collected through cookies or other tracking technology.  Even if this type of advertising is not part of current plans, your company’s website still may have third-party tracking activities occurring on it, and these activities must be disclosed in the privacy statement as of January 1, 2014.

Second, review the privacy statement displayed on your company’s website(s) and/or mobile application(s) and make sure it accurately, clearly and completely discloses the information collected from users, how it is collected (e.g., by your company or by third parties), how your company uses the information, and whether and how the information is disclosed to third parties.  If you use information collected from consumers for targeted advertising, make sure the privacy statement says so.  A federal judge in the Northern District of California recently reviewed a company’s online privacy policy to evaluate whether users reading it would understand that they were agreeing to user profiling and targeted advertising based on the contents of their e-mails.  The court found that because the policy lacked specificity about e-mail interception, users could not and did not consent to that practice.

Third, find out when and how the privacy statement is or was presented to users who provide personal information through the company website(s) and/or mobile application(s).  Is the privacy statement presented as a persistent link in the footer of each webpage?  Are users required to agree to the privacy statement?  If not, consider implementing a mechanism that requires users to do so before providing their personal information.
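
As one hypothetical way to implement such a mechanism, the Flask sketch below rejects a submission unless the user affirmatively accepted the privacy statement; the route, form field and messages are assumptions for illustration, not a legal standard for valid consent.

```python
from flask import Flask, request, abort

app = Flask(__name__)

@app.route("/signup", methods=["POST"])
def signup():
    # Require an affirmative act (e.g., a checked box named "accept_privacy"
    # on the form) before accepting any personal information.
    if request.form.get("accept_privacy") != "yes":
        abort(400, "You must agree to the privacy statement to continue.")
    # ... store the user's personal information only after consent ...
    return "Account created."

if __name__ == "__main__":
    app.run()
```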

Finally, if your privacy statement needs to be updated, make sure you notify all consumers in advance and ensure that the changes you propose are reasonable.  Unreasonable and overbroad changes made after the fact can cause reputational harm.  Instagram learned this at the end of 2012 when it tried to change its terms of service so that users’ photos could be used “in connection with paid or sponsored content or promotions, without any compensation to [the user].”  After a hail of consumer complaints, Instagram withdrew the revised terms and publicized new, more reasonable ones.


California Enacts New Data Privacy Laws

Sheppard Mullin

As part of a flurry of new privacy legislation, California Governor Jerry Brown signed two new data privacy bills into law on September 27, 2013: S.B. 46 amending California’s data security breach notification law and A.B. 370 regarding disclosure of “do not track” and other tracking practices in online privacy policies. Both laws will come into effect on January 1, 2014.

New Triggers for Data Security Breach Notification

California law already imposes a requirement to provide notice to affected customers of unauthorized access to, or disclosure of, personal information in certain circumstances. S.B. 46 adds a new category of data that triggers these notification obligations: a user name or email address, in combination with a password or security question and answer that would permit access to an online account.

Where the information subject to a breach falls only under this new category, companies may provide a security breach notification in electronic or other form that directs affected customers to promptly change their passwords and security questions or answers, as applicable, or to take other steps appropriate to protect the affected online account and all other online accounts for which the customer uses the same credentials. In the case of login credentials for an email account provided by the company, the company must not send the security breach notification to the compromised email address; instead, it must provide notice by one of the other methods currently provided for by California law, or by clear and conspicuous notice delivered online when the user is connected to the account from an IP address or online location from which the company knows the user ordinarily accesses the account.

Previously, breach notification in California was triggered only by the unauthorized acquisition of an individual’s first name or initial and last name in combination with one or more of the following data elements, when either the name or the data elements are unencrypted: social security number; driver’s license or state identification number; account, credit card or debit card number in combination with any required security or access codes; medical information; or health information. S.B. 46 not only expands the categories of information the disclosure of which may trigger the requirement for notification, it also—perhaps unintentionally—requires notification of unauthorized access to user credential information even if that information is encrypted. Thus, S.B. 46 significantly expands the circumstances in which notification may be required.
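
To make the expanded trigger concrete, the sketch below encodes the pre- and post-S.B. 46 logic as a Python function. The field names and set-based representation are hypothetical simplifications for illustration, not legal advice.

```python
# Data elements that, combined with a name, triggered notice before S.B. 46.
NAME_PLUS_ELEMENTS = {"ssn", "drivers_license", "state_id",
                      "financial_account", "medical_info", "health_info"}

def notification_required(breached: set, encrypted: set) -> bool:
    """Rough sketch of California's breach notification triggers after S.B. 46."""
    # Original trigger: name plus at least one listed element, where either
    # the name or that element is unencrypted.
    name_trigger = "name" in breached and any(
        elem in breached and ("name" not in encrypted or elem not in encrypted)
        for elem in NAME_PLUS_ELEMENTS
    )
    # New S.B. 46 trigger: online credentials; as drafted, encryption does
    # not appear to remove the notification obligation.
    credential_trigger = (
        ("username" in breached or "email" in breached)
        and ("password" in breached or "security_qa" in breached)
    )
    return name_trigger or credential_trigger

# A breach of even encrypted user names and passwords still triggers notice:
print(notification_required({"email", "password"}, encrypted={"email", "password"}))  # True
```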

New Requirements for Disclosure of Tracking Practices

A.B. 370 amends the California Online Privacy Protection Act (CalOPPA) to require companies that collect personally identifiable information online to include information about how they respond to “do not track” signals, as well as other information about their collection and use of personally identifiable information. The newly required information includes:

  • How the company responds to “do not track” signals or other mechanisms that provide consumers the ability to exercise choice over the collection of personally identifiable information about their online activities over time and across third-party websites or online services, if the company collects such information; and
  • Whether third parties may collect personally identifiable information about a consumer’s online activities over time and across different websites when a consumer uses the company’s website.

These disclosures have to be included in a company’s privacy policy. In order to comply with the first requirement, companies may provide a clear and conspicuous hyperlink in their privacy policy to an online description of any program or protocol the company follows that offers the user that choice, including its effects.

It’s important to note that the application of CalOPPA is broad. It applies to any “operator of a commercial Web site or online service that collects personally identifiable information through the Internet about individual consumers residing in California who use or visit its commercial Web site or online service.” As it is difficult to do business online without attracting users in technologically sophisticated and demographically diverse California, these provisions will apply to most successful online businesses.

What to Do

In response to the passage of these new laws, companies should take the opportunity to examine their data privacy and security policies and practices to determine whether any updates are needed. Companies should review and, if necessary, revise their data security breach plans to account for the newly added triggering information, as well as the new notification method that may be used if that information is accessed. Companies that collect personally identifiable information online or through mobile applications should review their online tracking activities and their privacy policies to determine whether, and what, revisions are necessary. The California Attorney General interprets CalOPPA to apply to mobile applications that collect personally identifiable information, so companies that provide such apps should include them in any review and update.

Article By:

 of

Will a New California Ballot Initiative Usher in the Next National Shift in Privacy Law?

Poyner Spruill

Just 10 years ago, California enacted the first breach notification law and unwittingly transformed the landscape of American privacy and data security law. To date, 45 other states, multiple federal agencies, and even local governments have followed suit. California residents may soon find themselves voting on a ballot initiative that could have an equally dramatic effect on this area of law.


The ballot initiative, known as the California Personal Privacy Initiative, is designed to remove barriers to privacy and data security lawsuits and also would promote stronger data security and an “opt-in” standard for the disclosure of personal information. Specifically, the initiative would amend the California Constitution to:

  1. Create a presumption that “personally identifying information” collected for a commercial or governmental purpose is confidential;

  2. Require the person collecting such information to use all reasonably available means to protect it from unauthorized disclosure; and

  3. Create a presumption of harm to a person whenever her confidential personally identifying information has been disclosed without her authorization.

Notwithstanding the presumption of harm, the amendment would permit the disclosure of confidential personally identifying information without authorization “if there is a countervailing compelling interest to do so (such as public safety or protected non-commercial free speech) and there is no reasonable alternative for accomplishing such compelling interest.”

Turning first to the impact on litigation, plaintiffs have largely been unsuccessful in privacy and data security litigation because they have failed to show harm resulting from an alleged unlawful privacy practice or security breach. The obligation to show harm arises at two stages when a case is litigated in federal court: first, the plaintiff must establish that he has suffered an “injury in fact” in order to meet the requirements for Article III standing, and second, the plaintiff must satisfy the harm requirement that applies to the relevant cause of action (e.g., negligence). If the case is litigated in state court, the standing requirement does not apply, but most, if not all, privacy and data security breach class actions have been litigated in federal court.

The ballot initiative would create a presumption of harm that could allow more lawsuits to satisfy the injury-in-fact standard (step one, above) and the harm requirement for the underlying cause of action (step two, above). Without that barrier, businesses would be stripped of their most effective means of prevailing on a motion to dismiss for certain causes of action. And in some scenarios, businesses would be forced to rely on untested or tenuous defenses, making companies more likely to settle, rather than fight, previously unsustainable causes of action.

Other components of the initiative would exacerbate the uptick in litigation, including the presumption that personally identifying information collected for a commercial purpose is confidential and the requirement that organizations use reasonable measures to prevent unauthorized disclosure of that information. Plaintiffs’ claims are sometimes based on an allegation that promises made in the defendant’s privacy notice regarding security measures are deceptive. Currently, companies can protect themselves against these claims by making only conservative representations about privacy and security. But the ballot initiative could create a general duty to adopt reasonable privacy and security measures, raising the prospect that plaintiffs could more successfully pursue negligence-style claims, which companies cannot deter solely by adopting conservative privacy notices.

The initiative also employs a very broad definition of personally identifying information: “any information which can be used to distinguish or trace a natural person’s identity, including but not limited to financial and/or health information, which is linked or linkable to a specific natural person.” (The definition does not cover publicly available information lawfully made available to the public from government records.) This expansive definition would force organizations to apply stricter security to types of information that might not otherwise receive those protections. Furthermore, the definition is particularly problematic when considered in conjunction with the presumption of harm discussed above because identifiable data such as names, email addresses, and device identifiers are routinely shared by businesses without consent. If this initiative succeeds, the increased threat of litigation will incentivize businesses to default to an opt-in standard for disclosures of information.

There is, however, at least one reason to believe that the initiative may not be as detrimental to business interests as some are predicting. Showing a nominal harm for the underlying cause of action does not necessarily equate to an award of damages, so even if the ballot initiative is successful, there would in some cases remain a practical limitation on the plaintiff’s ability to recoup money damages. Where statutory damages are available, or where a plaintiff can show some actual monetary harm, money awards would be possible. But in cases where statutory damages are not available and a plaintiff must show actual monetary harm to procure a monetary award, the ballot initiative may not save such claims. For example, the damages award flowing from a negligence claim is generally based on the actual damages incurred by the plaintiff. Therefore, even if the plaintiff could state a cause of action for the purpose of defeating a motion to dismiss, the plaintiff may not be entitled to anything more than a nominal damages award absent demonstrable monetary damage such as the cost of credit monitoring, identity theft insurance, or perhaps even therapy bills. On the other hand, courts could interpret the amendment as requiring recognition of a new type of harm, similar to emotional distress, that is compensable through money damages even without a showing of some concrete financial harm to the plaintiff.

The ballot initiative’s proponents must obtain 807,615 signatures before Californians would have the opportunity to vote on it. If the signatures are collected, then the initiative will appear on the ballot without further opportunity to seek amendments to address business concerns. If the initiative appears on the ballot, it would require only a simple majority vote to pass. Interested organizations should work to ensure that public debate over the initiative includes a discussion of the heavy burden on business that could result from the initiative.

 