Google Must Face Most Claims in Keyword Wiretap Class Action


If you were on Google’s home page yesterday at the office, you probably spent more time than you care to admit playing the “help the letter ‘g’ hit the piñata” game that Google created for its 15th birthday.

For Google, that might be a welcome distraction from the very bad news it received from the Northern District of California.  U.S. District Court Judge Lucy Koh denied in part Google’s motion to dismiss a 2010 claim in which users accuse Google of violating various state and federal laws by scanning the content of user emails to create user profiles and direct targeted advertising, allowing a putative class action suit against the search (and everything else online) giant to proceed.

Judge Koh’s order (full text can be found here) is significant in its handling of a number of Google’s arguments, but the rejection of a particular line of argument is understandably receiving much of the attention. In its Motion to Dismiss, Google argued that its practice of scanning emails is not a violation of the Federal Wiretap Act because, among other reasons, Gmail users and non-Gmail users have consented to the interception of emails.  Google’s consent argument was two-fold.  First, it argued that Gmail users had “expressly consented” to having their emails scanned by agreeing to its Terms of Service and Privacy Policies, which every Gmail user is required to do.  Second, it argued that non-Gmail users had “impliedly consented” to the practice by sending an email to a Gmail user, because at that time those non-users understood how the Gmail service operates.

Judge Koh rejected both of Google’s consent arguments, holding that the Court “cannot conclude that any party – Gmail users or non-Gmail users – has consented to Google’s reading of email for the purposes of creating user profiles or providing targeted advertising.”  The Court dug into the multiple iterations of Google’s Terms of Service and Privacy Policies that have been in place since 2007, and found that the policies did not explicitly notify users that Google would intercept emails for the purposes of creating user profiles and targeting advertisements.  The Court discussed a number of sections of Google’s policies where users allegedly consented to the practice of scanning emails for advertising purposes, and in each case found that the policies either described a different purpose for scanning emails (such as filtering out objectionable content) or were unclear when describing what kind of information would be intercepted (using descriptions like “information stored on the Services” or “information you provide”).  The Court further held that Google’s current policies (which were put in place on March 1, 2012) are equally ineffective at establishing consent.  Finally, the Court rejected the argument that non-Gmail users had impliedly consented to the interception of emails, noting that accepting Google’s theory of implied consent would “eviscerate” laws prohibiting interception of communications.

Judge Koh’s denial of Google’s Motion to Dismiss is the latest reminder that when it comes to privacy policies and terms of use, how you write something can be as important as what you write.  We will have more on the various issues discussed in Judge Koh’s order over the next few days.


Health Insurance Portability and Accountability Act/Health Information Technology for Economic and Clinical Health (HIPAA/HITECH) Compliance Strategies for Medical Device Manufacturers


As computing power continues to become cheaper and more powerful, medical devices are increasingly capable of handling larger and larger sets of data, making it possible to log ever-expanding amounts of information about medical device use and patient health. Whereas the data obtainable from a therapeutic or diagnostic device was once limited to time and error codes, medical devices now have the potential to store personal patient health information. Interoperability between medical devices and electronic health record systems only increases the potential for medical devices to store personal information.

The concern has become so significant that the U.S. Food and Drug Administration recently issued a draft guidance and letter to industry noting concerns associated with the theft or loss of medical information from cybersecurity-vulnerable devices. For a more detailed discussion of this issue, see last month’s blog post.

This raises another important issue for medical device manufacturers and health care providers: medical device compliance with the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act. Compliance with HIPAA and HITECH has become a major concern for hospitals and health care providers, and is increasingly an issue that medical device manufacturers will need to address.

A medical device manufacturer needs to answer three questions in order to determine whether the collection of patient information by a medical device is subject to HIPAA and HITECH:

  • Does the information qualify as Protected Health Information?
  • Is a Covered Entity involved?
  • Does a Business Associate relationship exist with a Covered Entity?

Protected Health Information

Protected Health Information (PHI) is individually identifiable health information transmitted or maintained in any form or medium.[1] Special treatment is given to electronic PHI, which is subject to both the HIPAA Privacy Rule and the Security Rule (the latter applies only to electronic PHI). To be “individually identifiable,” the PHI must either identify the individual outright, or there must be a reasonable basis to believe that the information can be used to identify the individual.[2]

“Health information” is any information (including genetic information) that is oral or recorded in any form or medium, and meets two conditions.[3] First, the information must be created or received by a health care provider, health plan, public health authority, employer, life insurer, school or university, or health care clearinghouse.[4] Second, the information must relate to the past, present, or future physical or mental health or condition of an individual, or the provision or payment of health care to an individual.[5]

If data collected by a medical device does not meet the definition of “individually identifiable” or “health information,” it is not covered under HIPAA and HITECH. For example, if a medical device logs detailed medical diagnostic information about a patient but includes no means by which that information may be traced to the patient, the data would likely fall outside of HIPAA and HITECH. Alternatively, a medical device, such as a mobile medical app, may request that a user provide detailed medical information about himself or herself. Provided that information is requested outside of the context of a health care provider, health plan, public health authority, employer, life insurer, school or university, HIPAA and HITECH similarly would likely not apply.

Covered Entities and Business Associates

There are two types of persons regulated by HIPAA and HITECH: “Covered Entities” and “Business Associates.” A Covered Entity is a health plan, a health care clearinghouse, or a health care provider who transmits any health information in electronic form in connection with a covered transaction.[6] A Business Associate is a person who either creates, receives, maintains, or transmits PHI for a regulated activity on behalf of a covered entity, or provides legal, actuarial, accounting, consulting, data aggregation, management, administrative, accreditation, or financial services to a covered entity, where the service involves the disclosure of PHI.[7]

Therefore, at a minimum, a Covered Entity must be involved for HIPAA and HITECH to apply. For example, medical devices sold directly to consumers for personal use would generally not be subject to HIPAA and HITECH.

Conversely, even if a medical device manufacturer is not itself a “Covered Entity,” HIPAA and HITECH may apply to it through a Business Associate relationship. Business Associates include Health Information Organizations, E-prescribing Gateways, and others that provide data transmission services with respect to PHI to a covered entity, and that require access on a routine basis to PHI.[8] Business Associates also include persons that offer PHI to others on the behalf of a covered entity, or that subcontract with a Business Associate to create, receive, maintain, or transmit PHI.[9]
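
The three-question analysis above reduces, at its core, to a simple gating procedure. The sketch below (in Python) is purely illustrative: the function name and boolean inputs are our own, and each input is itself a legal determination that cannot truly be reduced to a yes/no flag, but it captures the order of the questions as described in this article.

    def hipaa_hitech_likely_applies(
        individually_identifiable: bool,    # identifies the individual, or reasonably could
        is_health_information: bool,        # created/received by a listed entity; relates to health or payment
        covered_entity_involved: bool,      # health plan, clearinghouse, or provider transmitting electronically
        business_associate_relationship: bool,  # handles PHI for, or provides listed services to, a covered entity
    ) -> bool:
        """Illustrative gating logic only; each input is a legal judgment."""
        # The data must be PHI: both individually identifiable and health information.
        if not (individually_identifiable and is_health_information):
            return False
        # A Covered Entity must be involved, either directly or through a
        # Business Associate relationship (which itself presupposes a covered entity).
        return covered_entity_involved or business_associate_relationship

Under this sketch, the direct-to-consumer example above returns False: even detailed health data collected by a consumer device falls outside HIPAA and HITECH when no Covered Entity or Business Associate relationship is involved.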


[1] 45 C.F.R. § 160.103 “Protected health information”.

[2] 45 C.F.R. § 160.103 “Individually identifiable health information” (2)(i) and (ii).

[3] 45 C.F.R. § 160.103 “Health information”.

[4] 45 C.F.R. § 160.103 “Health information” (1).

[5] 45 C.F.R. § 160.103 “Health information” (2).

[6] 45 C.F.R. § 160.103 “Covered entity”.

[7] 45 C.F.R. § 160.103 “Business associate” (1).

[8] 45 C.F.R. § 160.103 “Business associate” (3)(i).

[9] 45 C.F.R. § 160.103 “Business associate” (3)(ii) and (iii).


Brain Spray and the Law


Now that we can capture and use the signals emitted by human brains, we should consider whether brain signals are public property. If your face and voice become available to the public through use, is the same true for your thoughts, when they can be read by others?

Several recent news items have illustrated the progress humans have made in understanding the brain’s workings and harnessing an active brain for practical purposes. For example, this week, Duke University researcher Miguel Nicolelis used microchips and the internet to connect the brains of two mice on different continents, so that the thoughts of one can influence the actions of the other. Much of Dr Nicolelis’s work involves creating an exoskeleton that a paralysed person could operate with brain signals.

Similarly, University of Pittsburgh researcher Andrew Schwartz has been working since 2006 to find ways for a person to control a robotic arm with only brain signals. In February 2013, surgeons implanted four microchips in a paralysed patient’s brain that translate her brain’s signals into movement in robotic equipment. 60 Minutes and ABC News showed a video of the Pittsburgh patient feeding herself ice cream through brain signals to a robotic arm.

Such scientific work involving directed brain signals seems like science fiction, but the technology is available right now, will only improve over time, and will soon be available commercially. Indeed, the most rudimentary brain-driven technology can already be purchased. High-end toy emporium Hammacher Schlemmer sells a “Telekinetic Obstacle Course” that uses focused brain waves to manoeuvre a ball through an obstacle course. The game comes with a headband to read your brain signals and then wirelessly transmit those signals to the game’s air fan, which increases or decreases speed depending on your signal, blowing a foam ball around an obstacle course.

Similarly, Australian scientist and entrepreneur Tan Le, the founder of Emotiv Lifescience, has created a headset that serves as an interface for reading the wearer’s brainwaves, making it possible to control virtual and physical objects with directed thoughts. Eventually the headset will be conditioned for diagnostic use, but current products use the brain-interface headset for videogames, allowing users to drive virtual race cars with their concentrated thoughts.

Modern science has identified two types of “brain spray”, or signals that can be harnessed from outside a person’s skull. The first is the directed thoughts described in the examples above, where certain voluntary brain signals, created by the subject concentrating on a goal or action, are read and translated by either a device worn by the subject or by microchips placed in the subject’s head. Research into this field, including US government-funded research by DARPA, may lead to practical solutions allowing wounded veterans or other people with disabilities to grasp, drive, walk and talk again.

This type of brain spray will lead to legal concerns. For example, if a wounded soldier is offered a limb that responds to his thoughts, the company providing the limb will want to capture information from the electronics that read his brain signals, both for understanding and improving the equipment and for monitoring its use. Could a disabled person say “no” to the company offering a newly functional life, or would he be forced to sign away his brain spray for the benefit of science and the company providing the equipment?

We all know that our signals from laptops and smartphones are captured by any number of companies – telephone signal providers, hardware manufacturers, app developers, banks and payment businesses – when we undertake actions or transactions over the internet. There is no reason that the same rules would not apply to our directed thoughts when our computing devices are controlled by focused brain signals. Google is already testing computing in the form of eyeglasses that could easily be equipped to read such brain spray and turn it into both action and data. Our thoughts would be available to our service providers.

The other brain spray that can be captured and turned to practical use is the translation of brain activation signals currently read by functional magnetic resonance imaging (fMRI) machines. These signals are more intrusive than the focused brain signals described above, because the fMRI provides pictures of what part of a human brain is activated by situations or stimuli. The fMRI pictures can easily be interpreted as triggers for various emotions. Because certain emotions trigger activity in specific parts of the brain, fMRI brain spray comes close to showing what the subject is feeling about the situation he is in.

Scientists currently read and interpret the emotional and logical meanings of fMRI signals from the human brain. In a 2008 article for Atlantic Monthly, Jeffrey Goldberg submitted himself to brain readings where scientists used MRI scanning to observe which areas of Goldberg’s brain reacted to certain images. The scientists showed Goldberg pictures of personal, political and cultural figures, recording his brain’s involuntary reactions with the MRI machine and noting when his brain activated in areas indicating affection and affinity for certain pictures (Goldberg’s wife and Bruce Springsteen) and revulsion at other pictures (Osama bin Laden).

This technology is attractive to corporations wanting to know how to stimulate your urge to buy their products and to see how you react to their advertising. However, do you want companies to know this much about you? Current law holds that if you have no reasonable expectation of privacy, then you cannot stop anyone from harvesting information from you. For example, when you are out on the public roads or when you walk up to an Automated Teller Machine at the bank, you are subjecting your appearance, your facial expressions and even your body itself, to scrutiny, photography, recordation and information capture by other people (or the bank) who share your public space.

If your appearance, your voice, and even your DNA are available to everyone in public (many US courts allow police to collect a suspect’s DNA in public places without a warrant), then why would this rule not extend to your brain spray when you enter a public area at a time when mobile fMRI technology or other brain signal capture technology is commercially available? Exposing your brain signals in public may be no different from exposing your face or your voice at the same time. Why would you have a reasonable expectation of privacy in your brain spray when you know it can be read by anyone with the right equipment? Many will argue that once your body is in a public space, it can be read by the government or business in any way that they are able.

If there were limits to the use of this technology to read your exposed brain signals, situational rules would have to be developed. For example, when fMRI technology is cost-effective and practical to use from a distance, should you automatically submit to brain scanning just by walking into a certain store, casino, bank or government building? Will companies provide notice before scanning you? Will the scan data be linked to your credit card purchases to identify you, linked to the uniform identifier in your smartphone, or linked to RFID tags in the products you buy?

This technology also has national security applications for interpreting malice in sensitive situations. The government may be able to read a suspect’s brain activity to identify intent to act before the crime takes place, scanning banks and airports for signs of potentially criminal intent. But our criminal law is based on punishment for actions, not thoughts or intentions. Everyone has intemperate thoughts of anger, frustration and fantasies of outrageous exploits, but people manage to keep those ideas in their heads and not act on them. How much do you want the government to know about your unfiltered thoughts, once those thoughts can be read from outside your head?

Once the technology is widely available, anyone could use its invasive and interpretive powers. Employers may examine their workers for hostile thoughts toward management or sympathetic thoughts toward labour organisers. Colleges can probe their applicants’ level of enthusiasm for learning. The military could test for signs of homosexuality in recruits without asking or telling. Lawyers and investigators in divorce cases would have a new avenue to examine unfaithful behaviour. How quickly would enthusiastic opposition dig up the thought crimes of political candidates?

Our laws are inadequate for addressing these issues or protecting the privacy of our brain spray. Current privacy law in the United States would not prohibit harvesting brain spray and would not even require an individual’s permission to do so. The current American privacy laws relating to reading your biometric measurements and physical condition only apply to body signs taken for health care purposes.

If a hospital records your blood type or your DNA to test for disease, those records are private and you have the right to keep them from being used for other purposes. However, a reading of your body, including your DNA and your brain spray, is not protected from transmission or sale between companies if the reading was taken for security, marketing or intelligence purposes. The recorded thoughts showing your excitement at the perfect little black dress or those used to power your prosthetic arm may be transferred to anyone. The law leaves you vulnerable.

Brain spray is the ultimate prize in the larger security and privacy debate concerning what personal facts may be captured by commercial or governmental interests. Why bother asking you what you think about a politician or a product when a company can read it directly from your brain? Without legal change, finding out who really loves “mom”, apple pie and America could soon be as simple as a head examination.

Originally published March 22, 2013 in the International edition of Intellectual Property Magazine Online


China’s First-Ever National Standard on Data Privacy – Best Practices for Companies in China on Managing Data Privacy


Companies doing business in China should take careful notice that China is now paying closer attention to personal data privacy. This is an opportune time for private companies to internally review existing data collection and management practices, determine whether those practices fall within the new guidelines, and, where necessary, develop and incorporate new internal data privacy practices.

The Information Security Technology-Guide for Personal Information Protection within Public and Commercial Systems (“Guidelines”), China’s first-ever national standard for personal data privacy protection, came into effect on February 1, 2013. The Guidelines, while not legally binding, are just what they purport to be – guidelines – and some commentators view them as merely technical guidance. However, the Guidelines should not be taken lightly, as they may be a precursor of new legislation ahead. China is not quite ready to issue new binding legislation, but there are indications it seeks to develop consistency with other internationally accepted practices, especially following recent data legislation enacted in the region by neighboring Hong Kong and other Asian countries.

What should companies look for when examining existing data privacy and collection policy and practices? As the Guidelines provide for rules on collecting, handling, transferring and deleting personal information, these areas of a company’s current policies should be reviewed.

“Personal Information”

What personal information is subject to the Guidelines? The Guidelines define “personal information” as “computer data that may be processed by an information system, relevant to a certain natural person, and that may be used solely or along with other information to identify such natural person.”

“General” and “Sensitive” Personal Information

The Guidelines make a distinction between handling “general” as opposed to “sensitive” personal information. Sensitive personal information is defined as “information the leakage of which will cause adverse consequences to the subject individual,” e.g., information such as an individual’s identity card, religious views or fingerprints.

Consent Required

If an individual’s personal information is being collected, that individual should be informed as to the purpose and the scope of the data being collected, and tacit consent must be obtained – that is, the individual does not object after being well informed. Where “sensitive” personal information is being collected, a higher level of consent must be obtained prior to collection and use: the individual must provide express consent, and evidence of that consent must be retained.

Notice

Best practices dictate that a well-informed notice be given to the individual prior to the collection of any personal information. The notice should clearly spell out, among other items, what information is being collected, the purpose for which the information will be used, the method of collection, the parties to whom the personal information will be disclosed, and the retention period.
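
To make the notice and consent rules concrete, the sketch below models a pre-collection notice and the two consent tiers described above. It is a minimal illustration only – the Guidelines prescribe no data model, and every field and function name here is hypothetical.

    from dataclasses import dataclass

    @dataclass
    class CollectionNotice:
        # Items a pre-collection notice should spell out, per the Guidelines.
        information_collected: str
        purpose_of_use: str
        collection_method: str
        disclosed_to: str
        retention_period: str

    def consent_sufficient(is_sensitive: bool, individual_objected: bool,
                           express_consent_evidenced: bool) -> bool:
        """Tacit consent (no objection after a well-informed notice) suffices
        for general personal information; sensitive information requires
        express consent, with evidence of that consent retained."""
        if is_sensitive:
            return express_consent_evidenced
        return not individual_objected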

Cross Border Transfer

The Guidelines further limit the transfer of personal information to any organization outside of P.R. China except where the individual provides consent, the government authorizes the transfer, or the transfer is required by law. It is unclear which law applies where a transfer is “required by law” – PRC law or the law of another country.

Notification of Breach

There is a notification requirement. The individual must be notified if personal information is lost, altered or divulged. If the breach incident is material, the “personal information protection administration authority” must also be notified. The Guidelines, however, do not define or make clear who this administration authority is.

Retention and Deletion

Best practice for a company is to minimize the amount of personal information collected. Once personal information has been used to achieve its intended purpose, it should not be stored and maintained, but immediately deleted.

The Guidelines may not be binding authority, but at a minimum they set certain standards for the collection, transfer and management of personal information. Especially for companies operating in China, the Guidelines are a call to action and a prompt for implementing best practices relating to data privacy. Companies should take this opportunity to assess their data privacy and security policies, review and revise customer information intake procedures and documentation, and develop and implement clear, company-wide internal data privacy policies and methods.


Social Media & Emerging Employer Issues: Are You Protected?


On June 13, 2013, Business First of Louisville and McBrayer hosted the second annual Social Media Seminar. The seminar’s predecessor, Social Media: Strategy and Implementation, was offered in 2012 and was hugely successful. This year’s proved to be no different. Presented by Amy D. Cubbage and Cynthia L. Effinger, the seminar focused on emerging social media issues for employers. If you missed it, you missed out! But don’t worry, a seminar recap is below, and for a copy of the PowerPoint slides click here.

McBrayer: If a business has been designated an entity that must comply with HIPAA, what is the risk of employees using social media?

Cubbage: Employers are generally liable for the acts of their employees which are inconsistent with HIPAA data privacy and security rules. As employees’ use of social networking sites increases, so does the possibility of a privacy or security breach. An employee may be violating HIPAA laws simply by posting something about their workday that is seemingly innocent. For instance, a nurse’s Facebook status that says, “Long day, been dealing with a cranky old man just admitted into the ER” could be considered a HIPAA violation and expose an employer to sanctions and fines.

 

McBrayer: Should businesses avoid using social media so that they will not become the target of social media defamation?

Effinger: In this day and age it is hard, if not impossible, for a business to be successful without some use of social media. There is always the risk that someone will make negative comments about an individual or a business online, especially when anonymity is an option. Employers need to know the difference between negativity and true defamation. Negative comments or reviews are allowed, perhaps even encouraged, on some websites. If a statement is truly defamatory, however, then a business should make efforts to have the commentary reported and removed. The first step should always be to ask the internet service provider for a retraction of the comment, but legal action may sometimes be required.

 

McBrayer: When does a negative statement cross the line and become defamation?

Effinger: It is not always easy to tell. First, a statement must be false. If it is true, no matter how damaging, it is not defamation. The same goes for personal opinions. Second, the statement must cause some kind of injury to an individual or business, such as by negatively impacting a business’s sales, to be defamation.

 

McBrayer: Can employers ever prevent employees from “speaking” on social media?

Effinger: Employers should always have social media policies in place that employees read, sign, and abide by. While it is never really possible to prevent employees from saying what they wish on social media sites, some of their speech may not be protected by the First Amendment’s freedom of speech clause.

 

McBrayer: What constitutes “speech” on the internet? Is “liking” a group on Facebook speech? How about posting a YouTube video?

Effinger: This is a problem that courts and governmental employment agencies, like the National Labor Relations Board, are just starting to encounter. There is no bright-line rule for what constitutes “speech,” but it is safe to say that anything an employee does online that is somehow communicated to others (even “liking” a group or posting a video) qualifies.

 

McBrayer: Since a private employer is not bound by the First Amendment, can they terminate employees for social media actions with no repercussions?

Effinger: No! In fact, it could be argued that private employees are afforded more protection for what they say online than public employees. While a private employer has no constitutional duty to allow free speech, the employer is subject to state and federal laws that may prevent them from disciplining an employee’s conduct. As a general rule, private employees have the right to communicate in a “concerted manner” with respect to “terms and conditions” of their employment. Such communication is protected regardless of whether it occurs around the water cooler or, let’s say, on Twitter.

 

McBrayer: It seems like the best policy would be for employers to prohibit employees from discussing the company in any negative manner. Is this acceptable?

Effinger: It is crucial for companies to have social media policies and procedures, but crafting them appropriately can be tricky. There have been several instances where the National Labor Relations Board has reviewed a company’s policy and found its overly broad restrictions or blanket prohibitions illegal. Even giant corporations like General Motors and Target have come under scrutiny for their social media policies and been urged to rewrite them so employees are given more leeway.

 

McBrayer: Is social media a company asset?

Cubbage: Yes! Take a moment to consider all of the “followers”, “fans”, or “connections” that your business may have through its social media accounts. These accounts provide a way to constantly interact with and engage clients and customers. Courts have recently dealt with cases where a company has filed suit after a rogue employee stole a business account in some manner, for instance by refusing to turn over an account password. Accounts are “assets,” even if not tangible property.

 

McBrayer: What is the best way for an employer to protect their social media accounts?

Cubbage: Social media accounts should first be addressed in a company’s operating agreement. Who gets the accounts in the event the company splits? There are additional steps every employer should take, such as including a provision in social media policies that all accounts are property of the business. Also, there should always be more than one person with account information, but never more than a few. Treat social media passwords like any other confidential business information – they should only be distributed on a “need to know” basis.


The “Reasonable” Perils of Data Security Law


The following is drawn from the materials to be presented at the 17th Annual America’s Claims Event 2013 conference in the “Cyber-Liability and Data Loss Claims: A Case Study from Notice of Occurrence Through Conclusion” session on June 20, 2013 in Austin, Texas.

NEGLIGENCE. “The omission to do something which a reasonable man, guided by those ordinary considerations which ordinarily regulate human affairs, would do, or the doing of something which a reasonable and prudent man would not do.”1

“When we think about data breaches, we often worry about malicious minded computer hackers exploiting software flaws, or perhaps Internet criminals seeking to enrich themselves at our expense. But the truth is that errors and negligence within the workplace are a significant cause of data breaches that compromise sensitive personal information.”2

According to a recent study by the Ponemon Institute, only 8% of the surveyed data breach incidents were due to external cyber attack, while 22% could be attributed in part to malicious employees or other insiders. Loss of laptops or other mobile devices containing sensitive data topped the survey, while mishandling of data “at rest” or “in motion” were also major contributors.3 A later study showed that 39% of surveyed organizations identified negligence as the root cause of their data breaches, while 37% were attributed to malicious or criminal attack.4

Negligent document disposal is a clear source of preventable negligence. On December 7, 2012, at least eight garbage bags were left unattended on a dirt road in Hudson, Florida, containing credit applications to Rock Bottom Auto Sales with names, driver’s license information, and Social Security numbers. Three days later, in Pittsburgh, Pennsylvania, job placement documents from the West Pittsburgh Partnership were found in a dumpster, all containing names and SSN’s.5 For that matter, the Internal Revenue Service in 2008 was found to have disposed of taxpayer documents in regular waste containers and dumpsters; a follow-up investigation revealed that IRS officials failed to consistently verify whether contract employees who have access to taxpayer documents had passed background checks.6

Convincing users to back up their laptops has been difficult enough in practice; getting them to encrypt them voluntarily is a far more daunting task. A 2010 Ponemon Institute study, admittedly biased towards large corporations, concluded that of those surveyed, typically 46% of the laptops held confidential data, while only 30% had their contents encrypted. A startlingly low 29% of the laptops had backup/imaging software installed, which implies that more than two-thirds of all laptops, if lost or stolen, would leave no backup of work in progress.7

Even though more devices are coming to market with built-in encryption capabilities, these features may simply be left switched off by their users, even as lost laptops, tablets, smartphones, USB “thumb” drives and other portable devices with unencrypted contents continue to provide a wealth of information to identity thieves.
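
The barrier is rarely technical. As a purely illustrative sketch (not a substitute for full-disk encryption or any particular vendor’s product), encrypting a sensitive file at rest takes only a few lines using the open-source Python cryptography library; the filename here is hypothetical.

    from cryptography.fernet import Fernet

    # Generate a key once and store it separately from the encrypted data
    # (for example, on a hardware token or in a key management service).
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Encrypt the file's contents and write out the ciphertext.
    with open("patient_records.csv", "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    with open("patient_records.csv.enc", "wb") as f:
        f.write(ciphertext)

    # Recovery requires the key; a stolen laptop holding only the
    # ciphertext exposes nothing readable.
    plaintext = fernet.decrypt(ciphertext)

The operational difficulty lies in key management and in getting users to turn such protections on, not in the cryptography itself.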

On March 22, 2013, a laptop used by clinicians at the University of Mississippi Medical Center was discovered to be missing. It contained patient names, social security numbers, addresses, diagnoses, birthdates and other personal information, protected only by a password.8

On January 8, 2013, an unencrypted flash drive was stolen from a Hephzibah, Georgia middle school teacher’s car, containing student SSN’s and other information.9 TD Bank had two unencrypted backup tapes – containing customer and dependent names, SSN’s, addresses, and account, credit and debit card numbers – go missing while being transported between two TD Bank offices in March 2012, but public notice was not made until March 4, 2013.10

An examination of reported data security incidents with potential or actual data privacy breaches reveals that the scope of what is deemed “reasonable” ranges from ordinary care in the disposal of documents containing personally identifiable information (“PII”) and personal health information (“PHI”), to the sophisticated data encryption, access authentication and other highly technical data security practices that “reasonably prudent” persons, companies and governmental agencies are now expected to employ to protect the personal data they have collected.

On October 10, 2012, the South Carolina Department of Revenue was informed of a potential cyber attack involving the personal information of taxpayers.11 The origin of the attack was traced to a state Department of Revenue employee who clicked on an embedded link in a “salacious” email and compromised his computer.12 The subsequent investigation revealed that “outdated computers and security flaws at the state’s Department of Revenue allowed international hackers to steal 3.8 million tax records”, according to Governor Nikki R. Haley. Apparently South Carolina did not encrypt Social Security Numbers, and once the outer perimeter security was compromised the hackers were able to log in as tax officials and read the data.13

Users of online services routinely provide personal information in order to shop or obtain other services, all of which gets recorded and tracked. Data privacy laws are intended to promote and enforce a number of fair information practices: giving individuals the ability to find out what personal information is being kept and by whom, opportunities to correct or remove such information, and assurances that reasonable measures will be undertaken to protect such information from disclosure and to properly dispose of it when appropriate; they may also include remedial measures to be undertaken in the event of a data breach.

In the United States, there is no single comprehensive statute for data privacy.14 Instead, a number of sector-specific federal laws have been enacted to address the particular sensitivity of information generally recorded by companies in each market sector, and forty-six states have enacted data breach notification statutes. If there is a data breach, you may be liable under state law to provide notice to those affected.15 In some jurisdictions, you may be required to provide notice to all consumer credit reporting agencies as well.16

The financial exposure to a data breach by a company may be insurable to some degree using various forms of “cyber liability” insurance, which expand and supplement many forms of more standard insurance coverages underwritten today. Policy premiums for such policies, however, are dependent upon the extent of data security practices implemented.

Conducting a data security risk assessment before encountering a data breach should identify measures that can be taken at the corporate level to provide additional protection not only to sensitive data, but also mitigate the consequences of a security incident where company data is disclosed, lost or stolen. Encrypted data in many cases may not be considered “exposed” for purposes of mandated notice to affected individuals.

In the event of a data security incident, consider engaging a data forensics team not only to identify the source and extent of the breach, but also to preserve evidence in case prosecution becomes possible.

We will discuss a data breach case study from inception through enforcement, resolution and potential mitigation through cyber liability insurance at our presentation at ACE 2013. We hope to see you then.


1 BLACK’S LAW DICTIONARY 1184 (4th ed. 1968).

2 Privacy Rights Clearinghouse, Are the Businesses You Frequent or Work For Exposing You to an Identity Thief?, (Mar. 6, 2012), https://www.privacyrights.org/workplace-identity-theft-quiz-alert-2012

3 The Human Factor in Data Protection, 3 PONEMON INSTITUTE LLC (January 2012), available at http://www.ponemon.org/local/upload/file/The_Human_Factor_in_data_Protection_WP_FINAL.pdf.

4 2011 Cost of Data Breach Study: United States, 7 PONEMON INSTITUTE LLC (March 2012), available at http://www.ponemon.org/local/upload/file/2011_US_CODB_FINAL_5.pdf.

5 http://www.privacyrights.org/data-breach/new (check Breach Type “PHYS”, Organization Type “BSR” and Year “2012”).

6 Increased Management Oversight of the Sensitive but Unclassified Waste Disposal Process Is Needed to Prevent Inadvertent Disclosure of Personally Identifiable Information, TREASURY INSPECTOR GENERAL FOR TAX ADMINISTRATION (May 8, 2009), http://www.treas.gov/tigta/auditreports/2009reports/200930059fr.pdf.

7 The Billion Dollar Lost Laptop Problem, 6 PONEMON INSTITUTE LLC (Sept. 30, 2010), available at http://newsroom.intel.com/servlet/JiveServlet/download/1544-8-3132/The_Billion_Dollar_Lost_Laptop_Study.pdf.

8 http://www.privacyrights.org/data-breach/new (check Breach Type “PORT”, Organization Type “EDU” and Year “2013”).

9 http://www.privacyrights.org/data-breach/new (check Breach Type “PORT”, Organization Type “EDU” and Year “2013”).

10 http://www.privacyrights.org/data-breach/new (check Breach Type “PORT”, Organization Type “BSF” and Year “2013”).

11 Kara Durrette, SC Department of Revenue hacked; millions of SC residents affected, http://www.midlandsconnect.com/sports/story.aspx?id=817902#.UVyOdheYu7w (posted Oct. 26, 2012, updated Oct. 27, 2012).

12 Matthew J. Schwartz, How South Carolina Failed To Spot Hack Attack, INFORMATION WEEK, Nov. 26, 2012, http://www.informationweek.com/security/attacks/how-south-carolina-failed-to-spot-hack-a/240142543.

13 Robbie Brown, South Carolina Offers Details of Data Theft and Warns It Could Happen Elsewhere, N.Y. TIMES, Nov. 20, 2012, available at http://www.nytimes.com/2012/11/21/us/more-details-of-southcarolina-hacking-episode.html?_r=0.

14 PETER P. SWIRE & KENESA AHMAD, FOUNDATIONS OF INFORMATION PRIVACY AND DATA PROTECTION 41 (International Association of Privacy Professionals) (2012).

15 NYC Administrative Code § 20-117(c) (2013); NY CLS State Technology Law § 208(2) (NY state residents only); 73 Pa. Stat. § 2303 (PA residents).

16 73 Pa. Stat. § 2305; NY CLS State Technology Law §208(7)(b).


Brace for Impact – Final HITECH Rules Will Require Substantially More Breach Reporting

The National Law Review recently published an article, Brace for Impact – Final HITECH Rules Will Require Substantially More Breach Reporting, written by Elizabeth H. Johnson with Poyner Spruill LLP:


The U.S. Department of Health and Human Services (HHS) has finally issued its omnibus HITECH Rules.  Our firm will issue a comprehensive summary of the rules shortly (sign up here), but of immediate import is the change to the breach reporting harm threshold.  The modification will make it much more difficult for covered entities and business associates to justify a decision not to notify when an incident occurs.

Under the interim rule, which remains in effect until September 23, 2013, a breach must be reported if it “poses a significant risk of financial, reputational, or other harm to the individual.” The final rule, released yesterday, eliminates that threshold and instead states:

“[A]n acquisition, access, use, or disclosure of protected health information in a manner not permitted under subpart E [the Privacy Rule] is presumed to be a breach unless the covered entity or business associate, as applicable, demonstrates that there is a low probability that the protected health information has been compromised based on a risk assessment of at least the following factors:

(i) The nature and extent of the protected health information involved, including the types of identifiers and the likelihood of re-identification;

(ii) The unauthorized person who used the protected health information or to whom the disclosure was made;

(iii) Whether the protected health information was actually acquired or viewed; and

(iv) The extent to which the risk to the protected health information has been mitigated.”
(Emphasis added).

In other words, if a use or disclosure of information is not permitted by the Privacy Rule (and is not subject to one of only three very narrow exceptions), that use or disclosure will be presumed to be a breach.  Breaches must be reported to affected individuals, HHS and, in some cases, the media.  To rebut the presumption that the incident constitutes a reportable breach, covered entities and business associates must conduct the above-described risk analysis and demonstrate that there is only a low probability the data has been compromised.  If the probability is higher, breach notification is required regardless of whether harm to the individuals affected is likely.  (Interestingly, this analysis means that if there is a low probability of compromise, notice may not be required even if the potential harm is very high.)

What is the effect of this change?  First, there will be many more breaches reported, resulting in even greater costs and churn than the already staggering figures published by Ponemon, which reports that 96% of health care entities have experienced a breach, with average annual costs of $6.5 billion since 2010.

Second, enforcement will increase.  Under the new rules, the agency is required (no discretion) to conduct compliance reviews when “a preliminary review of the facts” suggests a violation due to willful neglect.  Any reported breach that suggests willful neglect would then appear to require agency follow-up.  And the agency is of course free to investigate any breach reported to it.  HHS reports that it already receives an average of 19,000 notifications per year under the current, more favorable breach reporting requirements, so where will it find the time and money to engage in all these reviews?  Well, the agency’s increased fining authority, up to an annual maximum of $1.5 million per type of violation, ought to be some help.

Third, covered entities and business associates can expect to spend a lot of time performing risk analyses.  Every single incident that violates the Privacy Rule and does not fit into one of three narrow exceptions must be the subject of a risk analysis in order to defeat the presumption that it is a reportable breach.  The agency requires that those risk analyses be documented, and they must include at least the factors listed above.
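
Because each analysis must be documented and must address at least the four enumerated factors, a standard record format will help. The sketch below is one hypothetical way to structure that documentation in Python; note that the “low probability of compromise” conclusion is a legal judgment no code can compute, so it is recorded as an explicit finding rather than calculated.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class BreachRiskAssessment:
        incident_date: date
        # The four factors the final rule requires, at a minimum:
        nature_and_extent: str       # (i) types of identifiers; likelihood of re-identification
        unauthorized_recipient: str  # (ii) who used the PHI or received the disclosure
        actually_acquired: str       # (iii) whether the PHI was actually acquired or viewed
        mitigation: str              # (iv) extent to which the risk has been mitigated
        # Documented conclusion: the presumption of a reportable breach
        # stands unless a low probability of compromise is demonstrated.
        low_probability_of_compromise: bool = False

        @property
        def reportable(self) -> bool:
            return not self.low_probability_of_compromise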

So why did the agency change the reporting standard?  As it says in the rule issuance, “We recognize that some persons may have interpreted the risk of harm standard in the interim final rule as setting a much higher threshold for breach notification than we intended to set. As a result, we have clarified our position that breach notification is necessary in all situations except those in which the covered entity or business associate, as applicable, demonstrates that there is a low probability that the protected health information has been compromised. . . .”

The agency may also have changed the standard because it was criticized for having initially included a harm threshold in the rule, with critics claiming that the HITECH Act did not provide the authority to insert such a standard.  Although the new standard does, in essence, permit covered entities and business associates to engage in a risk-based analysis to determine whether notice is required, the agency takes the position that the new standard is not a “harm threshold.”  As it put it, “[W]e have removed the harm standard and modified the risk assessment to focus more objectively on the risk that the protected health information has been compromised.”  So, the agency got its way in that it will not have to receive notice of every single event that violates the Privacy Rule, and it has made a passable argument to satisfy critics that the “harm threshold” was removed.

The new rules are effective March 26, 2013 with a compliance deadline of September 23, 2013.  Until then, the current breach notification rule with its “significant risk of harm” threshold is in effect.  To prepare for compliance with this new rule, covered entities and business associates need to do the following:

  • Create a risk analysis procedure to facilitate the types of analyses HHS now requires and prepare to apply it in virtually every situation where a use or disclosure of PHI violates the Privacy Rule.
  • Revisit security incident response and breach notification procedures and modify them to adjust notification standards and the need to conduct the risk analysis.
  • Revisit contracts with business associates and subcontractors to ensure that they are reporting appropriate incidents (the definition of a “breach” has now changed and may no longer be correct in your contracts, among other things).
  • If you have not already, consider strong breach mitigation, cost coverage, and indemnification provisions in those contracts.
  • Revisit your data security and breach insurance policies to evaluate coverage, or lack thereof, if applicable.
  • Consider strengthening and reissuing training.  With every Privacy Rule violation now a potentially reportable breach, it’s more important than ever to avoid mistakes by your workforce.  And if they happen anyway, during a subsequent compliance review, it will be important to be able to show that your staff was appropriately trained.
  • Update your policies to address in full these new HIPAA rules.  The rules require it, and it will improve your compliance posture if HHS does conduct a review following a reported breach.

As noted above, our firm will issue a more comprehensive summary of these new HIPAA rules in coming days.

© 2013 Poyner Spruill LLP

Theft of Employee Data from Third-Party Vendor Exposes Employer and Vendor to Privacy Class Action

The National Law Review recently published an article by Kevin M. McGinty of Mintz, Levin, Cohn, Ferris, Glovsky and Popeo, P.C. regarding Employee Data Theft:

A recently filed class action lawsuit asserts claims against the Winn-Dixie supermarket chain and a third-party vendor, Purchasing Power, LLC, in connection with the alleged theft of employee data provided to Purchasing Power in order to administer a discount purchasing program offered to Winn-Dixie employees.  The claims advanced against Winn-Dixie and Purchasing Power highlight the potential risks associated with sharing employee or customer data with third-party vendors, and underscore the need for companies to ensure that the data security practices of third-party vendors are consistent with those of the companies themselves.  The complaint also demonstrates how failure to make prompt disclosure of data breaches to affected individuals can increase the risk of class action litigation.

According to the complaint in Burrows v. Purchasing Power, LLC, Case No. 1:12-cv-22800 (S.D. Fla.), Winn-Dixie either transferred or permitted Purchasing Power to access personally identifiable information (“PII”) of Winn-Dixie employees for the purpose of making a discount purchasing program available to Winn-Dixie’s employees.  The complaint alleges that Winn-Dixie notified employees on January 27, 2012 that Winn-Dixie employee data had been inappropriately accessed by an employee of Purchasing Power.  The notice further stated that Winn-Dixie first learned of the data theft in October 2011.  According to the complaint, Winn-Dixie did not explain the reason for its delay in providing notice, and Purchasing Power has never, at any time, provided notice of the breach to Winn-Dixie employees.

One unique aspect of Burrows that distinguishes it from the typical privacy class action is an allegation that the named plaintiff suffered actual injury by reason of a data breach.  Specifically, plaintiff alleges that the Internal Revenue Service refused to accept his 2011 federal income tax return, stating that a return had already been filed in his name.  Plaintiff claims that someone who had access to the PII stolen from Purchasing Power filed the return, thereby depriving plaintiff of an anticipated refund.  He seeks damages associated with the lost refund, in addition to other damages associated with the risk of further misuse of his PII.

The complaint asserts claims for negligence, violation of the federal Stored Communications Act, 18 U.S.C. § 2702, violation of the Florida Unfair and Deceptive Trade Practices Act, and breach of the common law right to privacy.  Plaintiff asserts these claims on behalf of a putative class of all Florida employees of Winn-Dixie whose PII was provided to or accessed by Purchasing Power.

The complaint in Burrows has some evident flaws.  The Stored Communications Act only applies to conduct by entities such as Internet service providers that are engaged in the “provision to the public of computer storage or processing services by means of an electronic communications system.”  18 U.S.C. § 2711(2).  Neither the defendants nor the conduct alleged facially meets this requirement.  Further, the particularized harm allegedly suffered by the named plaintiff allows defendants to argue that determining whether class members suffered actual injury would raise highly individualized questions of fact that preclude certification of a plaintiff class to seek money damages under Fed. R. Civ. P. 23(b)(3).

Nonetheless, certain aspects of Burrows pose challenges for the defendants.  Where, as here, the data breach allegedly resulted from a targeted effort to steal PII – unlike cases involving thefts of laptops, in which any data theft is incidental – courts have been more receptive to claims that class members’ costs to mitigate risk of identity theft constitute cognizable injury.  The actual injury allegedly suffered by the named plaintiff supports the argument that the threat of misuse of the stolen data is not speculative and, therefore, warrants monetary and injunctive relief.

Burrows provides a timely reminder that any company that shares customer or employee PII with a vendor must ensure that the vendor can adequately protect such data.  Executing a written agreement specifying the company’s and the vendor’s respective data security obligations is a necessary, but not sufficient, step.  The contract will not be worth the paper on which it is written if the vendor lacks the capability to comply with its obligations.  Individuals responsible for the company’s data security practices must engage in sufficient due diligence to assure the company that the vendor’s data security practices are at least commensurate with the company’s practices and otherwise comply with the legal requirements of all applicable states and jurisdictions.  In addition, to provide proper incentives to adhere to contract requirements, the agreement should indemnify the company for any losses caused by the vendor’s failure to satisfy its data security obligations.

Finally, Burrows illustrates the critical importance of prompt notification whenever a data breach occurs.  If plaintiff was indeed victimized by someone who filed a bogus return using the plaintiff’s stolen PII, notice to employees in October 2011, perhaps combined with proactive steps to protect affected employees from misuse of data, might have forestalled such an injury.  Absent such an occurrence, it is unlikely that a lawsuit would ever have been filed.  Ultimately, providing prompt notice whenever a data breach occurs avoids violating state law notice requirements and discourages the filing of class action lawsuits.

©1994-2012 Mintz, Levin, Cohn, Ferris, Glovsky and Popeo, P.C.

White House Report May Have Long-Term Effect on Consumer Privacy and How Companies Do Business

A recent White House report on consumer data privacy forecasts a multifaceted approach to fulfilling public expectations regarding the protection of consumers’ personal information.

In February 2012 the White House released a report detailing the current administration’s position on consumer privacy, entitled Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy.  Although it is uncertain if the report will result in new privacy legislation in the near term, the report may still have long-term implications for the current regulatory landscape.

As explained in the report’s Executive Summary, the consumer privacy framework proposed by the administration consists of four key elements: (1) a Consumer Privacy Bill of Rights; (2) a “multistakeholder” process to specify how the principles in the Consumer Privacy Bill of Rights apply in particular business  contexts; (3) effective enforcement; and (4) a commitment to increase interoperability with the privacy frameworks of international partners. Below we examine each of these elements.

1. Consumer Privacy Bill of Rights

Building upon Fair Information Practice Principles that were first promulgated by the U.S. Department of Health, Education, and Welfare in the 1970s, the Consumer Privacy Bill of Rights is intended to affirm consumer expectations with regard to how companies handle personal data.2  Although the administration recognizes consumers have “certain responsibilities” to protect their own privacy, it also emphasizes the importance of using personal data in a manner consistent with the context in which it is collected.

In a press release accompanying the release of the report, the White House summarized the basic tenets of the Consumer Privacy Bill of Rights3:

Transparency—Consumers have a right to easily understandable information about privacy and security practices.

Respect for Context—Consumers have a right to expect that organizations will collect, use and disclose personal data in ways that are consistent with the context in which consumers provide the data.4

Security—Consumers have a right to secure and responsible handling of personal data.

Access and Accuracy—Consumers have a right to access and correct personal data in usable formats, in a manner that is appropriate to the sensitivity of the data and the risk of adverse consequences to consumers if the data are inaccurate.

Focused Collection—Consumers have a right to reasonable limits on the personal data that companies collect and retain.

Accountability—Consumers have a right to have personal data handled by companies with appropriate measures in place to assure they adhere to the Consumer Privacy Bill of Rights.

The outline for the Consumer Privacy Bill of Rights is largely aspirational, in that it does not create any enforceable obligations.  Instead, the framework simply creates suggested guidelines for companies that collect personal data as a primary, or even ancillary, function of their business operations.  As the administration recognizes, in the absence of legislation these are only “general principles that afford companies discretion in how they implement them.”5

Nevertheless, as consumers become more invested in how their personal information is used, a company that disregards the basic tenets of the Consumer Privacy Bill of Rights may be doing so at its own peril. Although the Consumer Privacy Bill of Rights has not been codified, companies should expect that some iteration of the same principles will ultimately be legislated, or voluntarily adopted by enough industry leaders to render them enforceable by the FTC. Therefore, companies would be well advised to make sure they have coherent privacy policies in place now in order to avoid running afoul of guidelines imposed by whatever regulatory framework is implemented later.

2. The “Multistakeholder” Process to Develop Enforceable Codes of Conduct

The report also encourages stakeholders—described by the administration as “companies, industry groups, privacy advocates, consumer groups, crime victims, academics, international partners, State Attorneys General, Federal civil and criminal law enforcement representatives, and other relevant groups”—to cooperate in the development of rules implementing the principles outlined in the Consumer Privacy Bill of Rights. Of all the elements comprising the administration’s consumer privacy framework, it is this “multistakeholder” process that will likely see the most activity in the coming months.

The report identifies several benefits attributable to this approach6: First, an open process reflects the character of the internet itself as an “open, decentralized, user-driven platform for communication, innovation and economic growth.” Second, participation of multiple stakeholders encourages flexibility, speed and creativity. Third, this approach is likely to produce solutions “in a more timely fashion than regulatory processes and treaty-based organizations.” Finally, the multistakeholder process allows experts to focus on specific challenges, rather than relying upon centralized authority.

The report contemplates that the multistakeholder process will be moderated by the U.S. Department of Commerce’s National Telecommunications and Information Administration (NTIA), a view echoed by the press release accompanying the report.7 This process will likely present companies whose operations involve the collection of consumer data online—a rapidly expanding category that encompasses far more than just internet businesses—with an opportunity to shape future internet privacy legislation.

NTIA has already initiated the conversation through the issuance of a Request for Public Comments on the administration’s consumer privacy framework.8 NTIA has suggested that the first topic for discussion should be a “discrete issue that allows consumers and businesses to engage [in] and conclude multistakeholder discussions in a reasonable timeframe.”9 As one example, NTIA has suggested that stakeholders discuss how the Consumer Privacy Bill of Rights’ “transparency” principle should be applied to privacy notices for mobile applications. When one considers that, by some estimates, revenue generated by the mobile application market is expected to reach $25 billion over the next four years, it is clear that even this “discrete” issue alone could have a significant regulatory impact.10

3. Effective Enforcement

The report further suggests that the Federal Trade Commission (FTC) will play a vital role in the enforcement of the consumer privacy protections outlined by the administration and developed during the multistakeholder process.  The administration admits, however, that in the absence of new legislation, the FTC’s authority in the area of consumer privacy may be limited to the enforcement of guidelines adopted by companies voluntarily.

According to the administration, enforcement actions “by the FTC (and State Attorneys General) have established that companies’ failures to adhere to voluntary privacy commitments, such as those stated in privacy policies, are actionable under the FTC Act’s (and State analogues) prohibition on unfair or deceptive acts or practices.”11  Therefore, in the administration’s view, the guidelines developed during the multistakeholder process would be enforceable under the existing statutory framework.

In light of the current election cycle and the resulting political landscape, it seems unlikely Congress will pass new consumer privacy legislation in the near term.  Nevertheless, companies should remain mindful that the FTC—and even state Attorneys General—may become more aggressive in addressing flagrant violations of consumers’ privacy expectations.  For instance, California’s Attorney General has explained that her office intends to enforce an agreement that California reached with Apple and other industry leaders earlier this year.  The agreement would require developers of mobile applications to post conspicuous privacy policies that explain how users’ personal information is gathered and used.

Moreover, the increased attention directed at privacy issues by consumer groups and the public at large suggests an inevitable groundswell of support for new privacy legislation.  As Jon Leibowitz, the chairman of the FTC, explained earlier this week, we could see new privacy legislation early in the term of the next Congress.12

4. A Commitment to Increased Interoperability

Recognizing that other countries have taken different approaches to data privacy issues, the report also encourages the development of interoperability with regulatory regimes implemented internationally. The administration has suggested a three-pronged approach to achieving increased interoperability: mutual recognition, development of codes of conduct through multistakeholder processes, and enforcement cooperation.

With respect to mutual recognition, the report identifies existing examples of transnational cooperation in the privacy context.  For example, it cites the Asia-Pacific Economic Cooperation’s voluntary system of Cross Border Privacy Rules and also the European Union’s Data Protection Directive.  It appears that the administration, at least for now, will depend upon companies’ voluntary adoption of these international frameworks.

Just as the administration will rely upon the multistakeholder process to develop domestic codes of conduct, it will adopt the same approach to developing globally applicable rules and guidelines.  Although the administration contemplates this process will be directed by the U.S. Departments of Commerce and State, the report does not provide any details.

Finally, the report explains that the FTC will spearhead the U.S. Government’s efforts to cooperate with its foreign counterparts in the “development of privacy enforcement priorities, sharing of best practices, and support for joint enforcement initiatives.”13


1  Report at 1. 

2  Although businesses are also “consumers,” the report appears to focus on protecting individuals’ personally identifiable information. 

3  We Can’t Wait: Obama Administration Unveils Blueprint for a “Privacy Bill of Rights” to Protect Consumers Online, February 23, 2012, Office of the Press Secretary. 

4 To illustrate the “context” principle, the report provides the example of a hypothetical social networking provider.  Users expect that certain biographical information will be collected in order to improve the service; however, if the provider sells the same biographical information to an information broker for advertising purposes, that use is more attenuated from users’ expectations.  Therefore, the latter use is not consistent with the “context” in which the biographical information was provided. 

5  Report at 2. 

6  Report at 23. 

7  We Can’t Wait, February 23, 2012, Office of the Press Secretary (“In the coming weeks, the Commerce Department’s National Telecommunications and Information Administration will convene stakeholders … .”). 

8  Docket No. 120214135-2135-01, February 29, 2012. 

9 Moving Forward with the Consumer Privacy Bill of Rights, Lawrence E. Strickling, Assistant Secretary for Communications and Information, February 29, 2012. 

10 According to Markets & Markets, a market research company and consulting firm. 

11 Report at 29. 

12 U.S. Agency Seeks Tougher Consumer Privacy Rules, The New York Times, March 26, 2012. 

13 Report at 33. 

© 2012 McDermott Will & Emery

Privacy-on-the-Go: California Attorney General and Major Mobile Application Platforms Agree to Privacy Principles for Mobile Applications

Recently, The National Law Review featured an article written by Cynthia J. Larose and Jake Romero of Mintz, Levin, Cohn, Ferris, Glovsky and Popeo, P.C. regarding Mobile Apps and Privacy:

Application developers have been put on notice by the State of California. It is time to pay attention to user privacy and collection of information from user devices.

In an effort led by the office of California Attorney General Kamala D. Harris, the state has reached an agreement committing the six largest companies offering platforms for mobile applications (commonly referred to as “apps”) to a set of principles designed to ensure compliance with California’s Online Privacy Protection Act. The agreement with Apple Inc., Google Inc., Microsoft Corp., Amazon.com Inc., Hewlett-Packard Co., and Research In Motion Ltd., which collectively represent over 95% of the mobile application market, is significant for two reasons. First, it operates as an acknowledgement that California’s Online Privacy Protection Act applies to app developers as well as platform providers. Second, the agreement may effectively create a minimum standard for disclosures and transparency with regard to the collection of personal information by mobile applications. And although the Act is a state law, the global nature of the Internet means it will apply to every mobile app provided through the six firms’ app stores.

This alert includes a description of the principles underlying this agreement, as well as certain best practices to help mobile app developers ensure compliance. The full text of the agreement, as well as comments from the Office of the Attorney General, can be accessed online at http://ag.ca.gov/newsalerts/print_release.php?id=2630.

Mobile Applications and Data Privacy

The most recent data from the Pew Research Center shows that 50% of all adult cell phone owners have apps on their mobile phones, a percentage that has nearly doubled over the past two years. The same survey also indicated that approximately 43% of respondents purchased a phone on which apps were already installed. Many of these mobile applications, in order to facilitate their functionality, allow the app developer broad access to data held on the user’s mobile device. However, as noted by Attorney General Harris in a press conference announcing the agreement, many mobile applications, including twenty-two of the thirty most popular apps, lack a privacy policy explaining how much of the user’s data is accessible to the developer, and how and with whom that data is shared.

California’s Online Privacy Protection Act provides that “[a]n operator of a commercial Web site or online service that collects personally identifiable information through the Internet about individual consumers residing in California who use or visit its commercial Web site or online service shall conspicuously post its privacy policy on its Web site,” or, in the case of an operator of an online service, make that policy reasonably accessible to those consumers. In entering into this agreement, the six major platform providers have acknowledged that this requirement applies equally to mobile app developers (as “online services”), and they have agreed to, among other things, implement a means for users to report apps that do not comply with this requirement and a process for investigating and responding to those reports.

The New Privacy Standard and Ensuring Compliance

A likely outcome of this agreement is that compliance with California’s Online Privacy Protection Act will become a minimum standard for the mobile application industry, because even developers located outside California will likely conclude that it is easier to maintain a single policy that meets California’s requirements than to risk inadvertent non-compliance.

To ensure compliance, developers or providers of mobile apps that collect personal data from users’ mobile devices will be required to have a privacy policy that meets the requirements set forth in Section 22575(b) of California’s Business and Professions Code (as an incorporated portion of the Online Privacy Protection Act, Section 22575(b) can be accessed in full by following the link provided above). Specifically, the privacy policy must:

·  Identify the categories of personally identifiable information that the operator collects through the Web site or online service about individual consumers who use or visit its commercial Web site or online service, and the categories of third-party persons or entities with whom the operator may share that personally identifiable information.

·  If the operator maintains a process for an individual consumer who uses or visits its commercial Web site or online service to review and request changes to any of his or her personally identifiable information that is collected through the Web site or online service, provide a description of that process.

·  Describe the process by which the operator notifies consumers who use or visit its commercial Web site or online service of material changes to the operator’s privacy policy for that Web site or online service.

·  Identify its effective date.

In establishing a compliant privacy policy, an app developer or provider should take great care to ensure that the descriptions and processes it contains match the company’s actual operations and the information it collects, and the policy should be reviewed periodically by both legal counsel and the developer’s technical experts so that it can be updated as necessary. The policy should be clear and easy to understand, especially with regard to the collection and sharing of personal data. Companies that may be affected by this agreement and already have a privacy policy in place should review that policy to determine whether it needs updating. Developers and platform providers that do not comply with the law can be prosecuted under California’s Unfair Competition Law and/or False Advertising Law, which carry penalties of up to $500,000 per use of the app in violation, Harris said. “If developers do not follow the privacy policies we will sue,” she added.

Anticipated Developments

Per their agreement with Attorney General Harris, the six major mobile app platforms will commence working with app developers to ensure compliance and to provide education regarding privacy and data sharing. To increase awareness and promote transparency, mobile app developers will be required, as part of the process of submitting an app to a platform, to provide either a link to the developer’s privacy policy, a statement describing the policy, or the full text of the policy itself. In each case, a user who is considering downloading the developer’s app will have access to the privacy policy associated with that app prior to downloading it.

The six major platforms have agreed to reconvene within six months to evaluate any further required changes, but no specific timeline has been stated with regard to implementing the changes described above. However, for mobile app developers who hope to remain part of this quickly growing and highly lucrative market, there may not be a more opportune time to take advantage of the resources being provided at both the state and industry levels.

©1994-2012 Mintz, Levin, Cohn, Ferris, Glovsky and Popeo, P.C.