Encrypted Messaging Apps Create New Data Privacy Headaches for Employers

Businesses have largely benefited from the proliferation of mobile devices and text messaging apps that facilitate quick, round-the-clock communications. However, such technologies also make it increasingly difficult to monitor and control the unauthorized distribution of confidential data. On March 30, UK regulators fined a former managing director of Jefferies Group for divulging confidential client information. The banker, Christopher Niehaus, shared confidential information with two friends using WhatsApp, a popular text messaging app. The exposed information included the identity of a Jefferies Group client, the details of a deal involving the client, and the bank’s fee for the transaction. Perhaps the most surprising aspect of this story is that the leak was discovered at all. Because data sent on WhatsApp are encrypted and Mr. Niehaus used his personal mobile phone to send the messages, Jefferies Group viewed the communications—and subsequently informed regulators—only after Mr. Niehaus turned his device over to the bank in connection with an unrelated investigation.

Many employers use tools to monitor data sent to and from company-owned devices and e-mail accounts. However, companies cannot read messages delivered on programs offering end-to-end encryption, like WhatsApp or Apple’s iMessage, even if the information is sent on a company-owned device or network. Therefore, policies and tools intended to protect confidential information can be circumvented by employees using common texting apps. These technologies, which are typically free and easy to obtain, are causing headaches for employers across the country. For instance, recent media reports suggested that employees at the Environmental Protection Agency used Signal, an encrypted texting app, to surreptitiously strategize to undermine the current administration. According to other reports, White House press secretary Sean Spicer suspected his aides of using encrypted texting apps to leak information to the media.

Companies utilizing “bring your own device” practices face even greater risks. Even though end-to-end encryption may safeguard data from hackers, confidential information is often exposed when a device is lost or stolen. In fact, more data breaches are caused by lost devices and employee errors than third-party attacks. Employers can crack down on unauthorized communications by taking steps like disabling the installation of third-party texting apps on mobile devices. However, such measures may be extremely unpopular with employees who use their phones and tablets for both work-related and personal communications.

Given the growing popularity of encrypted texting apps, employers need to accept that they cannot monitor every one of their employees’ electronic communications. Businesses should not over-rely on data monitoring tools to secure sensitive information. Instead, it is more important than ever to enact and enforce up-to-date confidentiality policies. Employees may not understand that workplace confidentiality policies extend to communications on personal devices. Employees should be reminded to treat text messages like public in-person conversations and refrain from discussing confidential information on text messaging apps—even when conversing with clients or business associates. Employees should also be informed of the extensive damage a data breach can cause. Additionally, while employees may prefer using encrypted texting apps like WhatsApp and iMessage, businesses should consider offering internal messaging programs and encouraging their use for all work-related communications. Now more than ever, training employees to maintain confidentiality and make smart decisions is the most effective method of preventing leaks.

© 2017 Vedder Price

The Department Of Homeland Security Proposes New Rules Affecting Federal Government Contractors

This week, the Department of Homeland Security (“DHS”) issued three proposed rules expanding data security and privacy requirements for contractors and subcontractors. The proposed rules build upon other recent efforts by various federal agencies to strengthen safeguarding requirements for sensitive government information.  Given the increasing emphasis on data security and privacy, contractors and subcontractors are well advised to familiarize themselves with these new requirements and undertake a careful review of their current data security and privacy procedures to ensure they comply.

  • Privacy Training

DHS contracts currently require contractor and subcontractor employees to complete privacy training before accessing a Government system of records; handling Personally Identifiable Information and/or Sensitive Personally Identifiable Information; or designing, developing, maintaining, or operating a Government system of records. DHS proposes to include this training requirement in the Homeland Security Acquisition Regulation (“HSAR”) and to make the training more easily accessible by hosting it on a public website.  By including the rule in the HSAR, DHS would standardize the obligation across all DHS contracts.  The new rule would require the training to be completed within thirty days of the award of a contract and on an annual basis thereafter.

DHS invites comment on the proposed rule. In particular, DHS asks commenters to offer their views on the burden, if any, associated with the requirement to complete DHS-developed privacy training.  DHS also asks whether the industry should be given the flexibility to develop its own privacy training.  Comments must be submitted on or before March 20, 2017.

  • Information Technology Security Awareness Training

DHS currently requires contractor and subcontractor employees to complete information technology security awareness training before accessing DHS information systems and information resources. DHS proposes to amend the HSAR to require IT security awareness training for all contractor and subcontractor employees who will access (1) DHS information systems and information resources or (2) contractor owned and/or operated information systems and information resources capable of collecting, processing, storing or transmitting controlled unclassified information (“CUI”) (defined below).  DHS will require employees to undergo training and to sign DHS’s Rules of Behavior (“RoB”) before they are granted access to those systems and resources.  DHS also proposes to make this training and the RoB more easily accessible by hosting them on a public website.  Thereafter, annual training will be required.  In addition, contractors will be required to submit training certification and signed copies of the RoB to the contracting officer and maintain copies in their own records.

Through this proposed rule, DHS intends to require contractors to identify employees who will require access, to ensure that those employees complete training before they are granted access and annually thereafter, and to provide the government with, and maintain, evidence that training has been conducted. Comments on the proposed rule are due on or before March 20, 2017.

  • Safeguarding of Controlled Unclassified Information

DHS’s third proposed rule will implement new security and privacy measures, including handling and incident reporting requirements, in order to better safeguard CUI. According to DHS, “[r]ecent high-profile breaches of Federal information further demonstrate the need to ensure that information security protections are clearly, effectively, and consistently addressed in contracts.”  Accordingly, the proposed rule – which addresses specific safeguarding requirements set out in an Office of Management and Budget document outlining policy on managing government data – is intended to “strengthen[] and expand[]” upon existing HSAR language.

DHS’s proposed rule broadly defines “CUI” as “any information the Government creates or possesses, or an entity creates or possesses for or on behalf of the Government (other than classified information) that a law, regulation, or Government-wide policy requires or permits an agency to handle using safeguarding or dissemination controls[,]” including any “such information which, if lost, misused, disclosed, or, without authorization is accessed, or modified, could adversely affect the national or homeland security interest, the conduct of Federal programs, or the privacy of individuals.” The new safeguarding requirements, which apply to both contractors and subcontractors, include mandatory contract clauses; collection, processing, storage, and transmittal guidelines (which incorporate by reference any existing DHS policies and procedures); incident reporting timelines; and inspection provisions. Comments on the proposed rule are due on or before March 20, 2017.

  • Other Recent Efforts To Safeguard Contract Information

DHS’s new rules follow a number of other recent efforts by the federal government to better control CUI and other sensitive government information.

Last fall, for example, the National Archives and Records Administration (“NARA”) issued a final rule standardizing marking and handling requirements for CUI. The final rule, which went into effect on November 14, 2016, clarifies and standardizes the treatment of CUI across the federal government.

NARA’s final rule defines “CUI” as an intermediate level of protected information between classified information and uncontrolled information.  As defined, it includes such broad categories of information as proprietary information, export-controlled information, and certain information relating to legal proceedings.  The final rule also makes an important distinction between two types of systems that process, store or transmit CUI:  (1) information systems “used or operated by an agency or by a contractor of an agency or other organization on behalf of an agency”; and (2) other systems that are not operated on behalf of an agency but that otherwise store, transmit, or process CUI.

Although the final rule directly applies only to federal agencies, it directs agencies to include CUI protection requirements in all federal agreements (including contracts, grants and licenses) that may involve such information.  As a result, its requirements indirectly extend to government contractors.  At the same time, however, it is likely that some government contractor systems will fall into the second category of systems and will not have to abide by the final rule’s restrictions.  A pending FAR case and anticipated forthcoming FAR regulation will further implement this directive for federal contractors.

Similarly, last year the Department of Defense (“DOD”), General Services Administration, and the National Aeronautics and Space Administration added a new subpart and contract clause (52.204-21) to the FAR “for the basic safeguarding of contractor information systems that process, store, or transmit Federal contract information.”  The provision imposes a number of new information security controls with which contractors must comply.

DOD’s final rule imposes a set of fifteen “basic” security controls for covered “contractor information systems” upon which “Federal contract information” transits or resides. The new controls include:

  1. limiting access to the information to authorized users;
  2. limiting information system access to the types of transactions and functions that authorized users are permitted to execute;
  3. verifying controls on connections to external information systems;
  4. imposing controls on information that is posted or processed on publicly accessible information systems;
  5. identifying information system users and processes acting on behalf of users or devices;
  6. authenticating or verifying the identities of users, processes, and devices before allowing access to an information system;
  7. sanitizing or destroying information system media containing Federal contract information before disposal, release, or reuse;
  8. limiting physical access to information systems, equipment, and operating environments to authorized individuals;
  9. escorting visitors and monitoring visitor activity, maintaining audit logs of physical access, and controlling and managing physical access devices;
  10. monitoring, controlling, and protecting organizational communications at external boundaries and key internal boundaries of information systems;
  11. implementing subnetworks for publicly accessible system components that are physically or logically separated from internal networks;
  12. identifying, reporting, and correcting information and information system flaws in a timely manner;
  13. providing protection from malicious code at appropriate locations within organizational information systems;
  14. updating malicious code protection mechanisms when new releases are available; and
  15. performing periodic scans of the information system and real-time scans of files from external sources as files are downloaded, opened, or executed.

“Federal contract information” is broadly defined to include any information provided by or generated for the federal government under a government contract.  It does not, however, include either:  (1) information provided by the Government to the public, such as on a website; or (2) simple transactional information, such as that needed to process payments.  A “covered contractor information system” is defined as one that is:  (1) owned or operated by a contractor; and (2) “possesses, stores, or transmits” Federal contract information.

ARTICLE BY Connie N. Bertram, Amy Blackwood & Emilie Adams of Proskauer Rose LLP

Law Firm Data Breaches: Big Law, Big Data, Big Problem

The Year of the Breach

2016 was the year that law firm data breaches landed and stayed squarely in both the national and international headlines. There have been numerous law firm data breaches involving incidents ranging from lost or stolen laptops and other portable media to deep intrusions exposing everything in the law firm’s network. In March, the FBI issued a warning that a cybercrime insider-trading scheme was targeting international law firms to gain non-public information to be used for financial gain. In April, Panamanian law firm Mossack Fonseca suffered perhaps the largest-volume data breach of all time: millions of documents and terabytes of leaked data aired the (dirty) laundry of dozens of companies, celebrities and global leaders. Finally, Chicago law firm Johnson & Bell Ltd. was in the news in December when a proposed class action accusing it of failing to protect client data was unsealed.

A Duty to Safeguard

Law firms are warehouses of client information, and how that information is protected is being increasingly regulated and scrutinized. The legal ethics rules require attorneys to take competent and reasonable measures to safeguard information relating to clients. (ABA Model Rules 1.1, 1.6 and Comments). Attorneys also have contractual and regulatory obligations to protect information relating to clients and other personally identifiable information, such as financial and health information.

American Bar Association’s 2016 TechReport

Annually, the ABA conducts a Legal Technology Survey (Survey) to gauge the state of our industry vis-à-vis technology and data security. The Survey revealed that the largest firms (500 or more attorneys) reported experiencing the most security breaches, with 26% of respondents admitting they had experienced some type of breach. This is a generally upward trend from past years and analysts expect this number only to rise. This is likely because larger firms have more people, more technology and more data so there is a greater exposure surface and many more risk touch-points.

Consequences of Breach

The most serious consequence of a law firm security breach is loss of, or unauthorized access to, sensitive client data. However, the Survey shows a low incidence of this: only about 2% of breaches overall resulted in loss of client data. Other consequences of the breaches are significant, though: 37% of respondents reported business downtime/loss of billable hours, 28% reported hefty fees for correction including consulting fees, 22% reported costs associated with having to replace hardware/software, and 14% reported loss of important files and information.

Employing & Increasing Safeguards Commonly Used in Other Industries

The 2016 Survey shows that while many law firms are employing some safeguards and generally increasing and diversifying their use of those safeguards, our industry may not be using common security measures that other industries employ.

1. Programs and Policies. The first step of any organization in protecting its data is establishing a comprehensive data security program. Security programs should include measures to prevent breaches (like policies that regulate the use of technology) and measures to identify, protect, detect, respond to and recover from data breaches and security incidents. Any program should designate an individual, like a full-time privacy officer or information security director, who is responsible for coordinating security. However, the numbers show that the legal industry may not be up to speed on this basic need. Survey respondents reported their firms had the following documented policies:

Document or records management and retention policy: 56%

Email use policy: 49%

Internet use/computer use policy: 41%

Social media use: 34%

2. Assessments. Using security assessments conducted by independent third parties has been a growing security practice for other industries; however, law firms have been slow to adopt this security tool, with only 18% of law firms overall reporting that they had a full assessment.

3. Standards/Frameworks. Other industries use security standards and frameworks, like those published by the International Organization for Standardization (ISO) to provide approaches to information security programs or to seek formal security certification from one of these bodies. Overall, only 5% of law firms reported that they have received such a certification.

4. Encryption. Security professionals view encryption as a basic safeguard that should be widely deployed, and it is increasingly required by law for personal information; however, only 38% of overall respondents reported use of file encryption and only 15% use drive encryption. Email encryption has become inexpensive for businesses and easier to use with commercial email services, yet overall only 26% of respondents reported using email encryption for confidential/privileged communications or documents sent to clients.

5. Cybersecurity Insurance. Many general liability and malpractice policies do not cover security incidents or data breaches, so there is an increasing need for businesses to supplement their coverage with cybersecurity insurance. Unfortunately, only 17% of attorneys reported that they have cyber coverage.

Conclusion

It is important to note that the figures revealed by the 2016 Survey, while dismaying, may also be extremely conservative, as law firms have a vested interest in keeping a breach of their clients’ data as quiet as possible. There is also the very real possibility that many firms don’t yet know that they have been breached. The 2016 Survey demonstrates that there is still a lot of room for improvement in the privacy and data security space for law firms. As law firms continue to make the news for these types of incidents, it is likely that improvement will come sooner rather than later.

2016 Cybersecurity Year in Review, and Data Privacy Trends to Watch in 2017

With 2016 in the rear-view mirror, we have been reflecting on the many data privacy and cybersecurity legal developments of the past year, both in the U.S. and internationally, as well as focusing on trends to watch in the new year. With best wishes for a Happy New Year from all of us, we present a number of highlights from 2016, and suggest a few areas to watch in 2017.

U.S. Courts Wrestle With Law Enforcement Access to Data

Debate over law enforcement access to data stored by technology companies was perhaps the most visible privacy and cybersecurity issue of 2016, with far-reaching implications in both the U.S. and abroad. In July, the Second Circuit issued a decision in Microsoft’s challenge to a warrant issued under the Electronic Communications Privacy Act (ECPA), seeking email content stored in Ireland. The Second Circuit unanimously held that ECPA warrants cannot compel U.S. providers to disclose the contents of customer communications stored on foreign servers. In 2017, we expect that decision to have significant implications for U.S. technology companies, as well as consumers and companies that store data with U.S.-based providers. The government has sought rehearing en banc, and also has indicated that it intends to submit legislation to Congress to address the implications of the decision.  Congress has considered related issues in the International Communications Privacy Act.

Apple also engaged in a high-profile court battle with the government early in 2016 when the company refused the FBI’s request to unlock a terror suspect’s iPhone, though the dispute ended in March without a court decision when the FBI announced it had accessed the device without Apple’s assistance.  Congress continues to grapple with the consequences of that case, including by considering several encryption-related legislative proposals.

U.S. Supreme Court Addresses Privacy Standing in Spokeo

The U.S. Supreme Court issued its highly anticipated decision in Spokeo in May, addressing whether plaintiffs have standing to pursue statutory damages even in the absence of harm under the Fair Credit Reporting Act (FCRA). The Court reaffirmed that constitutional standing in federal court requires “concrete” (i.e., actual) harm and offered several guiding principles to assist lower courts in determining whether standing requirements have been met.  Although the case specifically dealt with the FCRA, Spokeo has significant implications in privacy and data breach litigation because numerous federal privacy laws have been construed to allow statutory damages even in the absence of actual harm.  Lower courts have begun applying the decision in data breach cases, including a recent district court ruling that a named plaintiff’s allegations that stolen personal information was used to file a false tax return were sufficient to impart standing under Spokeo.  In 2017, we expect this process to continue, as lower courts continue to interpret the Supreme Court’s decision.

A New Framework for EU-U.S. Data Transfers

The EU-U.S. Privacy Shield, a new framework for the transfer of personal data between the EU and the U.S., was announced in February and finalized in July.  Negotiators in the EU and U.S. worked on an accelerated timeline following the invalidation of the Safe Harbor in late 2015, resulting in the Privacy Shield—a significantly more stringent framework than its predecessor.  Companies began self-certifying adherence to the Privacy Shield in August, and as of this post more than 1,300 companies have signed up at the Department of Commerce’s website.  In 2017, we see continued uncertainty in this area.  The Privacy Shield faces a legal challenge in the European Court of Justice, and another cross-border mechanism—standard contractual clauses—also is subject to an EU court action.  The Privacy Shield itself was based, in part, on an exchange of letters between the Obama Administration and the European Commission relating to mass surveillance, and it remains to be seen whether the Trump Administration will continue the commitments made in those letters.  Relatedly, the European Parliament approved the EU-U.S. Umbrella Agreement in December—a framework for the exchange of personal data for law-enforcement (including anti-terrorism) purposes between the EU and U.S.

Sweeping New Data Protection Laws Approved in Europe

The European Parliament passed into law the General Data Protection Regulation (GDPR) in April, a sweeping new set of privacy and data security rules that will take effect in mid-2018.  Unlike the EU Data Protection Directive which it replaces, the GDPR for the most part will have direct effect throughout the EU without requiring national implementation legislation.  Companies doing business in (or with companies operating in) the EU have begun preparing for compliance with the new requirements, and the Article 29 Working Party released the first set of guidance on the GDPR in December.  In 2017, we expect the Article 29 Working Party to continue to fill in some of the blanks left in the GDPR, and we also expect companies to intensify their preparation for the mid-2018 effective date of this landmark legislation.

FTC’s Data Security Authority Tested (Again) in LabMD

 Following the Third Circuit’s decision affirming the FTC’s authority to regulate corporate data security in Wyndham last year, the FTC sought to further bolster its data security authority in LabMD.  In July, the Commission unanimously vacated a prior Administrative Law Judge decision and found that LabMD’s actions were “unfair” under Section 5 of the FTC Act.  In November, however, the Eleventh Circuit stayed enforcement of the FTC’s LabMD order, finding that LabMD was likely to succeed on the merits because the FTC’s interpretations of aspects of the FTC Act relating to its data security authority were likely not reasonable. The case will now proceed on the merits, but the grant of the stay suggests that the Eleventh Circuit may be receptive to LabMD’s arguments for ultimate reversal of the LabMD order.  This could produce a circuit split between the Eleventh Circuit and the Third Circuit (which decided the Wyndham case), and thereby provide a basis for an attempt to secure Supreme Court review of the FTC’s jurisdiction.  Moreover, this case could provide a vehicle for a new FTC, with a Republican majority, to reconsider the agency’s current aggressive approach on “unfairness” as applied to data security.

Newly Established Cybersecurity Requirements and Guidelines

A number of U.S. states and standard-setting organizations issued broadly applicable cybersecurity requirements and guidelines in 2016.  In February, as part of the release of its 2016 Data Breach Report, the Office of the Attorney General for California established a de facto standard that companies doing business in California must, at a minimum, adopt twenty specific security controls established by the Center for Internet Security in order to have “reasonable” security practices in California.  And New York State proposed first-in-the-nation cybersecurity regulations that contain several mandatory security requirements for financial services institutions—those institutions that are regulated by New York banking, insurance, or financial services laws—which are currently being revised following industry comments and are scheduled to take effect in March 2017.

At the federal level, in October, the Department of Defense (DoD) finalized its safeguarding and cyber incident reporting obligations, requiring DoD contractors to implement specific security controls for information systems that store, process, or transmit DoD data and to report actual or possible cybersecurity incidents involving such data to DoD within 72 hours.  And in the coming year, similar security controls and reporting requirements will likely be required for all government contractors, as a September rule promulgated by the National Archives and Records Administration (NARA) set the stage for a Federal Acquisition Regulation (FAR) clause that will likely mirror DoD’s requirements.  In November, the National Institute of Standards and Technology (NIST) released guidance for small businesses on cybersecurity preparedness, including a list of “recommended practices” that are applicable not just to small businesses, but to entities of all sizes.

New Cybersecurity and Privacy Laws and Regulations in China

As expected, authorities in China were active in passing a new Cybersecurity Law and proposing new cybersecurity and privacy regulations in 2016.  In November, the Standing Committee of China’s National People’s Congress passed China’s first Cybersecurity Law (the “Law”), which will take effect starting June 1, 2017.  Described as China’s “fundamental law” in the area of cybersecurity, the new Law articulates the government’s priorities with respect to “cyberspace sovereignty,” consolidates existing network security-related requirements (covering both cyber and physical aspects of networks), and grants government agencies greater power to regulate cyber activities.  It is the first Chinese law that systematically lays out the regulatory requirements on cybersecurity, subjecting many previously under-regulated or unregulated activities in cyberspace to government scrutiny.  At the same time, it seeks to balance the dual goals of enhancing cybersecurity and developing China’s digital economy, which relies heavily on the free flow of data.

China’s National Information Security Standardization Technical Committee (NISSTC) drafted a Personal Information Security Standard, a non-binding standard for data privacy and security practices of companies operating in China.  The NISSTC also released seven draft standards for comment in December, with a public comment period running until February 2, 2017.  The Cyberspace Administration of China (CAC) has also been active in 2016, issuing new rules for mobile apps in July, and draft regulations aimed at protecting minors in cyberspace in October. Finally, in August China’s State Administration of Industry and Commerce (SAIC) released draft regulations for public comment that would amend consumer protection laws to, among other things, supplement existing privacy obligations for companies operating in China.

FCC Releases Broadband Privacy Rules

The FCC’s increasing focus on privacy issues continued in 2016 with the release of broadband privacy rules.  The new rules, which were formally proposed in April, regulate the privacy practices of broadband Internet Service Providers (ISPs), including requirements to obtain consent for certain uses of consumer data and to adhere to certain data security practices.   The rules were adopted by the Commission in a 3-2 party-line vote in October, so their fate is quite uncertain under the incoming Republican administration.  Given that petitions for reconsideration currently are pending before the FCC and will remain so until the change in Administration, these rules could be one of the first areas in which the new FCC makes its mark on the policies of the Obama-era Commission.

Connected Devices and The Internet of Things

2016 saw several developments relating to the Internet of Things (IoT), such as internet-connected refrigerators and thermostats, which present unique opportunities and challenges from a privacy and cybersecurity perspective.  In April, the U.S. Department of Commerce issued a request for public comment on the benefits, challenges, and potential government roles for IoT, and the U.S. Senate Commerce Committee approved a bill (which remains pending) to establish a working group to study and facilitate IoT growth.  Around the same time, the European Commission released a series of industry-related initiatives addressing IoT, among other things.  And in November, NIST released cybersecurity guidance for IoT, and the Broadband Internet Technical Advisory Group released another report detailing the unique security and privacy challenges posed by IoT.  In 2017, we expect the focus on connected devices to escalate, particularly given the emergence of driverless cars and other innovative technologies.

Privacy and Data Security in the Trump Administration

Privacy and data security issues were prominent in the campaign. Allegations were even made that Russia was behind the DNC hack.

Despite cybersecurity being front and center in the campaign, the Trump campaign did not generate specific cybersecurity policies. One thing Donald Trump did promise was a top-to-bottom review of US cyber defense and security led by government, law enforcement, and private sector experts.  He also committed to establishing a Justice Department task force to coordinate responses to cyber attacks and a cyber review team to audit existing government IT systems.

Another area on which the President-elect spoke was the need to clamp down on the theft of US intellectual property, especially by foreign nations and competitors. Tools already exist to do that, of course, such as the Economic Espionage Act of 1996.  Congress, which earlier this year enacted the Defend Trade Secrets Act, is likely to respond favorably to any additional resources or authorities the new administration might seek for this purpose.

Related to cybersecurity were Mr. Trump’s comments on encryption during Apple’s dispute with the Justice Department in the wake of the San Bernardino terrorist attack. Trump sided strongly with law enforcement, and we can expect Congress to return to the subject of encryption in the coming session.  Whether anything happens legislatively is uncertain, and some in Congress want to await the pending report of the National Academy of Sciences on encryption, which will remain a highly contentious issue.  Still, Candidate Trump’s comments show where he stands.  One wildcard in the debate may be how weakened FBI Director Jim Comey, who has been leading the charge on encryption issues for law enforcement, has become.

Also due for legislative consideration in 2017 is the renewal of section 702 surveillance authority under the FISA Amendments Act, which is due to sunset at the end of the year. Trump is likely to take a much more pro-surveillance position than either the current administration or Secretary Clinton might have taken.  Privacy advocates in both parties are likely to press for changes in the law, but at this point the odds would be against them.

Either on its own or in conjunction with the section 702 debate, Congress is likely to return to consideration of ECPA reform. The House passed the E-mail Privacy Act unanimously this Congress, but it stalled in the Senate due to privacy groups’ opposition to an amendment sought by Senator Cornyn.  The must-pass section 702 legislation is likely to provide a vehicle for e-mail privacy and related ECPA reform legislation if it does not move on its own.

Also in the mix on these issues is consideration of legislation clarifying and modernizing how domestic law enforcement accesses data across national borders. Legislation addressing that issue enjoys prominent support in Congress and may well get taken up in conjunction with ECPA reform or be folded into the section 702 renewal debate.

And the House Judiciary Committee is already moving ahead with a hearing scheduled to consider protecting geolocation data, setting up another area of dispute between law enforcement and privacy advocates.

Also in the mix legislatively will be proposals on how firms deal with data breaches and theft of information. The recently disclosed hack of Yahoo and the DNC hack have again raised the profile of data breach issues.  While there is consensus that something should be done, disagreement remains on the details, including whether a federal law should preempt state data breach laws.  There is little reason to expect that the disagreements can be bridged or that legislation will in fact move forward.

Finally and briefly, among the other issues that Congress is likely to look at, though on which a legislative solution is unlikely, are:

1) how to address distributed denial of service attacks, and the interrelated topic of the growth of the Internet of Things, on which several committees have already scheduled hearings in the wake of the recent significant DDoS attack. At this stage, Congress is likely to seek to continue to build its level of understanding of the issues here rather than act on anything;

2) how to address the recruitment of terrorists and the spread of violent extremism through social media; and

3) the implementation of last year’s Cybersecurity Information Sharing Act by the Department of Homeland Security.

One final point: the key players on these issues are likely to remain the same. One possible change would have Senate Judiciary ranking member Pat Leahy, just reelected, move to become ranking member of the Appropriations Committee, which could open the door for Senator Feinstein to become ranking member of the Judiciary Committee.  She would be more sympathetic to law enforcement and less aligned with the privacy advocates than Senator Leahy has been.  However, her move might allow tech-friendly Senator Mark Warner to become vice chairman of the Intelligence Committee, of which Senator Richard Burr will remain as chairman after his reelection.

© 2016 Covington & Burling LLP

President Donald J. Trump – What Lies Ahead for Privacy, Cybersecurity, e-Communication?

Following a brutal campaign – one laced with Wikileaks’ email dumps, confidential Clinton emails left unprotected, flurries of Twitter and other social media activity – it will be interesting to see how a Trump Administration will address the serious issues of privacy, cybersecurity and electronic communications, including in social media.

Mr. Trump had not been too specific about many of his positions while campaigning, so it is difficult to have a sense of where his administration might focus. But one place to look is his campaign website, where the now President-elect outlined a vision, summarized as follows:

  • Order an immediate review of all U.S. cyber defenses and vulnerabilities by individuals from the military, law enforcement, and the private sector, the “Cyber Review Team.”

  • The Cyber Review Team will provide specific recommendations for safeguarding with the best defense technologies tailored to the likely threats.

  • The Cyber Review Team will establish detailed protocols and mandatory cyber awareness training for all government employees.

  • Instruct the U.S. Department of Justice to coordinate responses to cyber threats.

  • Develop the offensive cyber capabilities we need to deter attacks by both state and non-state actors and, if necessary, to respond appropriately.

There is nothing new here as these positions appear generally to continue the work of prior administrations in the area of cybersecurity. Perhaps insight into President-elect Trump’s direction in these areas will be influenced by his campaign experiences.

Should we expect a tightening of cybersecurity requirements through new statutes and regulations?

Mr. Trump has expressed a desire to reduce regulation, not increase it. However, political party hackings and unfavorable email dumps from Wikileaks, coupled with continued data breaches affecting private and public sector entities, may prompt his administration and Congress to do more. Politics aside, cybersecurity clearly is a top national security threat, and it is having a significant impact on private sector risk management strategies and individual security. Some additional regulation may be coming.

An important question for many, especially for organizations that have suffered a multi-state data breach, is whether we will see a federal data breach notification standard, one that would “trump” the current patchwork of state laws. With Republicans in control of the executive and legislative branches, at least for the next two years, and considering the past legislative activity in this area, a federal law on data breach notification that supersedes state law does not seem likely.

Should we expect an expansion of privacy rights or other protections for electronic communication such as email or social media communication?

Again, much has been made of the disclosure of private email during the campaign, and President-elect Trump is famous (or infamous) for his use of social media, particularly his Twitter account. For some time, however, many have expressed concern that federal laws such as the Electronic Communications Privacy Act and the Stored Communications Act are in need of significant updates to address new technologies and usage, while others continue to have questions about the application of the Communications Decency Act. We also have seen an increase in scrutiny over the content of electronic communications by the National Labor Relations Board, and more than twenty states have passed laws concerning the privacy of social media and online personal accounts. Meanwhile, the emergence of Big Data, artificial intelligence, IoT, cognitive computing and other technologies continue to spur significant privacy questions about the collection and use of data.

While there may be a tightening of the rules concerning how certain federal employees handle work emails, based on what we have seen, it does not appear at this point that a Trump Administration will make these issues a priority for the private sector.

We’ll just have to wait and see.

Jackson Lewis P.C. © 2016

Location Data Gathering Under Europe’s New Privacy Laws

Why are EU regulators particularly concerned about location data?

Location-specific data can reveal very specific and intimate details about a person: where they go, what establishments they frequent and what their habits or routines are. Some location-specific data garners heightened protections, such as where and how often a person obtains medical care or where a person attends religious services.

In the U.S., consumers typically agree to generalized privacy policies by clicking a box prior to purchase, download or use of a new product or service. But the new EU regulations may require more informed notice and consent be obtained for each individual use of the data that a company acquires. For example, a traffic app may collect location data to offer geographically-focused traffic reports and then also use that data to better target advertisements to the consumer, a so-called “secondary use” of the data.

The secondary use is what concerns EU regulators. They want to give citizens back control over their personal data, which means meaningfully and fully informing them of how and when it is used. For example, personal data can only be gathered for legitimate purposes, meaning companies should not continue to collect location data beyond what is necessary to support the functionality of their business model; additional consent would also need to be obtained each time the company wants to re-purpose or re-analyze the data it has collected. This puts an affirmative obligation on companies to know if, when and how their partners are using consumer data and to make sure such use has been consented to by the consumer (a simple illustration of purpose-specific consent tracking appears after the list below).

What should a company do that collects location data in the EU? 

  1. Consumers should be clearly informed about what location information is being gathered and how it will be used; this means not just the primary use of the data, but also any ancillary uses, such as targeting advertisements;

  2. Consumers should be given the opportunity to decline to have their data collected, or to be able to “opt out” of any of the primary or secondary uses of their data;

  3. Companies need to put a mechanism in place to make consumers aware if the company’s data collection policies change; for example, a company may not have a secondary use for the data now, but in two years it may plan to package and resell that data to an aggregator; and

  4. Companies must have agreements in place with their partners in the “business ecosystem” to ensure their partners are adhering to the data collection permissions that the company has obtained.
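To make the consent-tracking point above concrete, the short sketch below shows one way a company might record purpose-specific consent and check it before any secondary use of location data. It is an illustrative assumption only, not legal guidance or a reference to any actual product; the class, field and purpose names are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical purposes a location-based app might declare to its users.
PRIMARY_USE = "traffic_reports"
SECONDARY_USES = {"ad_targeting", "data_resale"}

@dataclass
class ConsentRecord:
    """Tracks which uses of location data a consumer has agreed to."""
    user_id: str
    consented_purposes: set = field(default_factory=set)
    history: list = field(default_factory=list)  # audit trail of consent events

    def grant(self, purpose: str) -> None:
        self.consented_purposes.add(purpose)
        self.history.append((purpose, "granted", datetime.now(timezone.utc)))

    def revoke(self, purpose: str) -> None:
        self.consented_purposes.discard(purpose)
        self.history.append((purpose, "revoked", datetime.now(timezone.utc)))

    def may_use(self, purpose: str) -> bool:
        # Each use of the data, primary or secondary, requires its own consent.
        return purpose in self.consented_purposes

# Usage: the consumer consents to the primary use only; a later attempt to
# re-purpose the data for advertising is blocked until fresh consent is obtained.
record = ConsentRecord(user_id="user-123")
record.grant(PRIMARY_USE)
assert record.may_use(PRIMARY_USE)
assert not record.may_use("ad_targeting")
```

In practice, records of this kind would live in a durable store and feed the consumer-facing notice, opt-out and partner-audit mechanisms described in the list above.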

© Polsinelli PC, Polsinelli LLP in California

Guidance on Ransomware Attacks under HIPAA and State Data Breach Notification Laws

On July 28, 2016, the US Department of Health and Human Services (HHS) issued guidance (the guidance) under the Health Insurance Portability and Accountability Act (HIPAA) on what covered entities and business associates can do to prevent and recover from ransomware attacks. Ransomware attacks can also trigger concerns under state data breach notification laws.

What Is Ransomware?

Ransomware is a type of malware (malicious software). It is delivered to devices and systems through spam, phishing messages, websites and email attachments, or it can be installed directly by an attacker who has hacked into a system. In many instances, when a user clicks on the malicious link or opens the attachment, the malware infects the user’s system. Ransomware attempts to deny access to a user’s data, usually by encrypting the data with a key known only to the hacker who deployed the malware. After the user’s data is encrypted, the ransomware attacker directs the user to pay a ransom in order to receive a decryption key. However, the attacker may also deploy ransomware that destroys information or impermissibly transfers it from an information system to a remote location controlled by the attacker. Paying the ransom may result in the attacker providing the key needed to decrypt the information, but this is not guaranteed. In 2016, at least four hospitals reported ransomware attacks, but additional attacks are believed to go unreported.

HIPAA Security Rule and Best Practices

The HIPAA Security Rule requires covered entities and business associates to implement security measures. It also requires covered entities and business associates to conduct an accurate and thorough risk analysis of the potential risks and vulnerabilities to the confidentiality, integrity and availability of electronic protected health information (ePHI) the entities create, receive, maintain or transmit and to implement security measures sufficient to reduce those identified risks and vulnerabilities to a reasonable and appropriate level. The HIPAA Security Rule establishes a floor for the security of ePHI, although additional and/or more stringent security measures are certainly permissible and may be required under state law. Compliance with HIPAA’s existing requirements provides covered entities and business associates with guidance on how to prevent and address breaches that compromise protected health information. The new HIPAA guidance specific to ransomware reinforces how the existing requirements can help an entity protect sensitive information.

HHS has suggested that covered entities and business associates back up their data frequently, because ransomware denies access to that data. Maintaining frequent backups and ensuring the ability to recover data from a separate backup source is crucial to recovering from a ransomware attack. Test restorations should be periodically conducted to verify the integrity of backed-up data and provide confidence in an organization’s data restoration capabilities. Because some ransomware variants have been known to remove or otherwise disrupt online backups, entities should consider maintaining backups offline and inaccessible from their networks.
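As one way of illustrating the test-restoration point, the minimal sketch below compares files in a primary data directory against an offline backup copy using checksums and flags anything missing or altered. It is an example under assumed, hypothetical directory paths, not HHS guidance and not a substitute for a full restoration test.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(primary_dir: str, backup_dir: str) -> list:
    """Compare every file in the primary directory against the backup copy.

    Returns a list of (relative_path, problem) tuples; an empty list means
    the backup matches. Note that files legitimately changed since the last
    backup will also show as differences; the goal here is simply to confirm
    the backup is readable and intact.
    """
    primary, backup = Path(primary_dir), Path(backup_dir)
    problems = []
    for src in primary.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(primary)
        copy = backup / rel
        if not copy.exists():
            problems.append((str(rel), "missing from backup"))
        elif file_digest(src) != file_digest(copy):
            problems.append((str(rel), "contents differ"))
    return problems

if __name__ == "__main__":
    # Example run against illustrative (hypothetical) paths.
    for rel, problem in verify_backup("/data/ephi", "/mnt/offline-backup/ephi"):
        print(f"{rel}: {problem}")
```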

Covered entities and business associates should also install malicious software protections and educate their workforce members on data security practices that can reduce the risk of ransomware, including how to detect malicious emails, the importance of avoiding suspicious websites and complying with sound password policies.

Lastly, each covered entity or business associate should ensure that its incident response plan addresses ransomware incidents. Many entities have crafted their policies and incident response plans to focus on other more typical daily personal information risks, such as the lost laptop or personal device. A ransomware event should expressly trigger the activities required by the incident response plan, including the requirement to activate the response team, initiate the required investigation, identify appropriate remediation, determine legal and regulatory notification obligations, and conduct post-event review.

Indications of a Ransomware Attack

Indicators of a ransomware attack could include:

  • The receipt of an email from an attacker advising that files have been encrypted and demanding a ransom in exchange for the decryption key
  • A user’s realization that a link that was clicked on, a file attachment opened or a website visited may have been malicious in nature
  • An increase in activity in the central processing unit (CPU) of a computer and disk activity for no apparent reason (due to the ransomware searching for, encrypting and removing data files)
  • An inability to access certain files as the ransomware encrypts, deletes, renames and/or relocates data
  • Detection of suspicious network communications between the ransomware and the attackers’ command and control server(s) (this would most likely be detected by IT personnel via an intrusion detection or similar solution)

What to Do if Subject to a Ransomware Attack?

A covered entity or business associate that is subject to a ransomware attack may find it necessary to activate its contingency or business continuity plans. Once the contingency or business continuity plan is activated, an entity will be able to continue its day-to-day business operations while continuing to respond to, and recover from, a ransomware attack. The entity’s robust security incident procedures for responding to a ransomware attack should include the following processes to:

  • Activate the entity’s incident response plan and follow its requirements;

  • Notify the entity’s cyber liability insurer as soon as enough information is available to indicate a possible ransomware attack and within any time period required under the applicable policy;
  • Detect and conduct an analysis of the ransomware, determining the scope of the incident and identifying what networks, systems or applications are affected;
  • Determine the origin of the incident (who/what/where/when), including how the incident occurred (e.g., tools and attack methods used, vulnerabilities exploited);
  • Determine whether the incident is finished, is ongoing or has propagated additional incidents throughout the environment;
  • Contain and eradicate the ransomware and mitigate or remediate vulnerabilities that permitted the ransomware attack and propagation;
  • Recover from the ransomware attack by restoring data lost during the attack and returning to “business-as-usual” operations; and
  • Conduct post-incident activities, which could include a deeper analysis of the evidence to determine if the entity has any regulatory, contractual or other obligations as a result of the incident (such as providing notification of a breach of protected health information), and incorporating any lessons learned into the overall security management process of the entity to improve incident response effectiveness for future security incidents.

Additionally, it is recommended that an entity infected with ransomware consult, early on, with legal counsel who can assist with reporting the incident to law enforcement to the extent it is a criminal matter. Counsel frequently have ongoing contacts within the cybercrime units of the Federal Bureau of Investigation (FBI) or the United States Secret Service that may deploy appropriate resources to address the matter and supply helpful information. These agencies work with federal, state, local and international partners to pursue cyber criminals globally and assist victims of cybercrime. Counsel can advise on the type of information appropriate to disclose to law enforcement, while taking steps to establish and maintain the attorney-client privilege and, if appropriate, the attorney work product protection. Counsel also can assist in preparing communications (e.g., mandatory notifications and reports to senior executives and boards), advise on potential legal exposure from the incident and provide representation in connection with government inquiries or litigation.

If Ransomware Infects a Covered Entity’s or a Business Associate’s Computer System, Is It a Per Se HIPAA Breach?

Not necessarily. Whether or not the presence of ransomware would be a breach under the HIPAA Privacy Rule or HIPAA Security Rule (the HIPAA Rules) is a fact-specific determination. A breach under the HIPAA Rules is defined as, “…the acquisition, access, use or disclosure of PHI in a manner not permitted under the [HIPAA Privacy Rule] which compromises the security or privacy of the PHI.” A covered entity or business associate should, however, perform a risk assessment after experiencing a ransomware incident to determine if a reportable breach has occurred and to determine the appropriate mitigating action.

If the ePHI was encrypted prior to the incident in accordance with the HHS guidance, there may not be a breach if the encryption that was in place rendered the affected PHI unreadable, unusable and indecipherable to the unauthorized person or people. If, however, the ePHI is encrypted by the ransomware attack, a breach has occurred because the ePHI encrypted by the ransomware was acquired (i.e., unauthorized individuals have taken possession or control of the information), and thus is a “disclosure” not permitted under the HIPAA Privacy Rule.

Thus, in order to determine if the information was acquired and accessed in the incident, additional analysis will be required. Unless the covered entity or business associate can demonstrate that there is a “[l]ow probability that the PHI has been compromised,” based on the factors set forth in the HIPAA breach notification rule, a breach of PHI is presumed to have occurred. If a breach has occurred, the entity must comply with the applicable breach notification provisions under HIPAA and, if applicable, state law.

Does a Ransomware Event Trigger State Data Breach Notification Obligations?

Possibly. In a majority of states, data breach notification requirements are triggered when there is both “unauthorized access” to and “acquisition” of personally identifiable information. Whether a ransomware event meets the access and acquisition elements of these statutes is, as in the HIPAA analysis, a fact-specific determination. If, for example, the hackers were able to move the personally identifiable information from the entity’s network to their own, it is clear that the hackers achieved unauthorized access to and acquisition of the information. State data breach notification laws pertaining to the affected individuals would need to be analyzed and factored into the entity’s overall notification requirements.

Ransomware, though, is usually designed to extort money from victim entities rather than steal personally identifiable information. If the forensics team can present credible evidence that no personally identifiable information was acquired by the hackers, then these obligations may not be triggered. The forensics team, consistent with the incident response team requirements, should document findings that support a defensible decision not to notify affected individuals, in case of a subsequent regulatory investigation or litigation.

In a minority of states, the data breach notification requirements are triggered when there is simply “unauthorized access” to personally identifiable information. This lower standard may mean that the entity must notify its customers of a data breach even when no personally identifiable information is acquired by a hacker. Entities that maintain personally identifiable information of residents of Connecticut, New Jersey and Puerto Rico, for example, may find themselves in the unfortunate position of having to provide data breach notifications even when the information is not acquired by a hacker.

Finally, if the entity is providing services to a business customer, it will need to determine whether it is obligated to notify the business customer (as owner of the affected personal information) of the ransomware attack, taking into account state data breach notification requirements, contractual obligations to notify the business customer and the overall value of the commercial relationship.

Pokémon Go – Staying Ahead of Game and Avoiding Unexpected HIPAA Risks

It was inevitable – Pokémon Go fever has swept the nation, and now little cartoon creatures have found their way into your health care facility.

Wait, what!?

Yes, you read that right, those pesky (or beloved, depending on your point of view) creatures are popping up literally everywhere, and unfortunately hospitals and other health care facilities are no exception. As a result, in addition to keeping up with the various advances in mobile technology related to health care and patient management, health care facilities across the country must now add keeping up with virtual and augmented reality to their to-do lists.

So why should this matter to your health care facility?

Currently, industry trends suggest that hospitals and other health care facilities are taking two divergent views when it comes to this new frontier – (a) asking to be taken off the “map” (i.e., having Pokémon removed from their property), or (b) embracing the game, as it motivates the young (and old) to be active. While the latter could be tempting – and for some facilities with proper controls it could be successful – for most, we recommend taking whatever steps possible to prohibit game play within your health care facility.

Regardless of the road taken by your facility, there are a few key considerations to keep in mind when evaluating potential HIPAA risks related to virtual and augmented reality games, which are only likely to grow substantially in number in the future.

How do Pokémon Go and augmented reality games work?

At first glance, this specific game (which is fairly primitive as augmented reality goes) does not appear problematic from a HIPAA perspective. However, there are some hidden risks. The game allows a user to switch between a virtual map and a camera mode that literally shows the Pokémon in the world around the player. The images seen on the player’s phone do not appear to be saved or shared automatically; however, the app does let players take a photo of what they see from within the app. In a world dominated by social media, this is where the problem arises.

Pokémon Go and other augmented reality games allow a player to engage in a virtual game that takes place in the real world around them. Pokémon Go players are motivated to take photos of their surroundings and share them with third parties and on social media. In a health care environment, this could easily result in a player – whether a patient, an employee or a third-party gamer – inadvertently sharing protected health information (PHI) with all of his or her followers in as few as four clicks from taking a screenshot.

Many hospitals are already dealing with the unintended consequences of individuals playing Pokémon Go and wandering into areas containing sensitive information. Even if photographs are not taken, the mere presence of individuals who are only on premises for the purpose of playing a game heightens potential information privacy and security risks.

What is this picture worth?

Hospitals have learned the hard way the high cost of a HIPAA violation. In April of this year, the Department of Health and Human Services, Office for Civil Rights (OCR) reached a $2.2 million settlement with New York Presbyterian Hospital in connection with the filming of “NY Med” on the premises, which resulted in the unauthorized sharing of two patients’ images. OCR also determined that the hospital failed to safeguard health information when it offered the film crew access to an environment where PHI could not be effectively protected.

OCR is likely to follow the same logic in the context of augmented reality games and the potential exposure of PHI to unauthorized parties. Having Pokémon Go players on hospital premises – including patients, visitors, employees and, most especially, those present solely for the purpose of playing the game – could lead to unnecessary HIPAA risks.

Best practices for Pokémon Go and its successors:

  • Take yourself off the “map,” but remember this is not where the story ends: To alleviate a number of risks, you can, of course, submit an online request to Niantic Labs – the creator of Pokémon Go – to be removed as an in-game location. However, this step alone will not be sufficient to end all possible risks related to Pokémon Go and the universe of augmented reality games that could pop up next. It is also notable that the process of being removed as an in-game stop has proven lengthy, so it is advisable to take additional steps regarding your stance on Pokémon Go and augmented reality games. To speed up the process, consider sending a formal demand – above and beyond the online system – to have your coordinates removed from game play.

  • Determine your stance on patient play: Aside from hospital policies on visitor and patient cell phone use, determine whether your establishment wants to promote patient use of Pokémon Go. Many facilities are finding Pokémon Go to be a valuable tool in promoting exercise and activity – especially post-procedure. If your hospital wants to take that approach, consider limiting play to “Pokémon Zones” where PHI is less accessible and adequately protected. However, keep in mind that significant risks remain related to permitting unauthorized individuals access to PHI.

  • Determine if health care providers and hospital staff should be prohibited from playing: Reevaluate your social media and bring-your-own-device policies to determine whether augmented reality games such as Pokémon Go need to be specifically addressed. The player base of Pokémon Go appears to be growing exponentially, and it is highly likely that your facility’s employees are among those playing or considering playing. While taking photographs is often prohibited in hospital settings, make sure the policy is clear that the prohibition also applies to photos taken in the augmented reality space. Take the opportunity to clarify and reiterate acceptable social media practices. Also, if your hospital is creating “Pokémon Zones,” stress to health care providers and staff that those zones apply to them as well.

While Pokémon Go took over the scene almost literally overnight, this is just a glimpse of what the future holds. As augmented reality mobile applications and games become even more popular, and more immersive, these issues are bound to come up again and reinvent themselves in the form of new challenges. Now is the time to determine your organization’s policy on augmented reality and revisit social media and BYOD policies. Pokémon Go may or may not be here to stay – but it is definitely not one of a kind.

©2016 Drinker Biddle & Reath LLP. All Rights Reserved

Pokémon GO – Next Stop: Regulation & Litigation

As everyone is aware, the Pokémon GO craze has taken the world by storm in the past month. Reports estimate there have been over 75 million downloads of the digital game since the program became available on July 6. Apple has not issued any concrete numbers, but has confirmed that it was the most downloaded app ever in its first week of availability.

When the game was first offered, users were required to grant permission not only to use a player’s smartphone camera and location data but also to gain full access to the user’s Google accounts — including email, calendars, photos, stored documents and any other data associated with the login. The game’s creator, Niantic, responded to a public outcry – including a letter from Minnesota Senator Al Franken – stating that the expansive permission requests were “erroneous” and that Pokémon GO did not use anything from players’ accounts other than basic Google profile information.  The company has since issued a fix to reduce access only to users’ basic Google account profile information.

As is often the case, remarkable success naturally attracts critics who take aim. In a letter dated July 22, 2016, the Electronic Privacy Information Center (EPIC) wrote to the Federal Trade Commission (FTC) requesting government oversight on Niantic’s data collection practices. EPIC is a non-profit public interest research center in Washington, D.C., focusing public attention on privacy and civil liberties issues.

Niantic’s Privacy Policy

EPIC’s letter highlighted a number of alleged issues with Niantic’s privacy policy:

  • Niantic does not explain the scope of information gathered from Google profiles or why this is necessary to the function of the Pokémon GO app.

  • Niantic collects users’ precise location information through “cell/mobile tower triangulation, wifi triangulation, and/or GPS.” The Company’s Privacy Policy states Niantic will “store” location information and “some of that location information, along with your … user name, may be shared through the App.” The Privacy Policy does not indicate any limitations on how long Niantic will retain location data or explain how indefinite retention of location data is necessary to the functionality of the Pokémon GO app.

  • With Pokémon GO, Niantic has access to users’ mobile device camera. The Terms of Service for Pokémon GO grant Niantic a “nonexclusive, perpetual, irrevocable, transferable, sublicensable, worldwide, royalty-free license” to “User Content.” The Terms do not define “User Content” or specify whether this includes photos taken through the in-app camera function.

  • The Pokémon GO Privacy Policy grants Niantic wide latitude to disclose user data to “third-party service providers,” “third parties,” and “to government or law enforcement officials or private parties as [Niantic], in [its] sole discretion, believe necessary or appropriate.” Niantic also deems user data, including personally identifiable information, to be a “business asset” that it can transfer to a third party in the event the company is sold. This issue has been identified as a particular concern to another non-profit organization – Common Sense Media, an independent non-profit organization focusing on children and technology. According to Common Sense Media, location information and history of children should not be considered a “business asset.”

EPIC’s Request to the FTC

Based on the issues highlighted above, EPIC requested that the FTC use its authority to regulate unfair competition under the Federal Trade Commission Act (15 U.S.C. § 45) to prohibit practices by Niantic and other similar apps that fail to conform with the FTC’s Fair Information Practices and the principles set forth in the White House’s 2012 report, “Consumer Data Privacy in a Networked World.”

According to EPIC, Niantic’s unlimited collection and indefinite retention of detailed location data violates 15 U.S.C. § 45(n) because it is “likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.”

EPIC also contends that the unlimited collection and indefinite retention of detailed location data violate the data minimization requirements under the Children’s Online Privacy Protection Act (COPPA), which requires providers to “retain personal information collected online from a child for only as long as is reasonably necessary to fulfill the purpose for which the information was collected.” 16 C.F.R. § 312.10.

Private Lawsuit Filed Against Niantic

Subsequently, a Pokémon GO user filed suit in Florida state court alleging that the game’s terms of service and privacy policy are deceptive and unfair, in violation of the Florida Deceptive and Unfair Trade Practices Act. Beckman v. Niantic Inc., case number 50-2016-CA-008330, Fifteenth Judicial Circuit for Palm Beach County, Florida.

Practice Pointer

The issue of consumer privacy continues to garner significant attention. Whether you are an app developer or any other company that collects and retains personal information, it is time to review your applicable policies and take appropriate steps to ensure that your company is not the subject of government agency inquiry, litigation, or a data breach.

For employers whose employees may be bumping into each other in the hallway while playing the game, consideration should be given to banning or otherwise regulating employee involvement. Certainly a drop in productivity is a concern. However, even if accessing the game during work time is barred, employers should be concerned about the potential compromise of proprietary and confidential information that could occur as the result of data breaches or through counterfeit games designed to give hackers access to protected information.

Jackson Lewis P.C. © 2016