California May Be Headed Towards Sweeping Consumer Privacy Protections

On June 21st, Democrats in the California Legislature reached a tentative agreement with the group of consumer privacy activists spearheading a ballot initiative for heightened consumer privacy protections. Under the deal, the activists would withdraw the existing ballot initiative in exchange for the California Legislature passing, and Governor Jerry Brown signing into law, a similar piece of legislation, with some concessions, by June 28th, the final deadline to withdraw ballot initiatives. If enacted, the Act would take effect January 1, 2020.

In the “compromise bill,” Assemblyman Ed Chau (D-Arcadia) amended the California Consumer Privacy Act of 2018 (AB 375) to ensure that both the consumer privacy activists and the ballot initiative’s opponents would be comfortable with its terms.

Some of the key consumer rights provided for in AB 375 include:

  • A consumer’s right to request deletion of personal information, which would require the business to delete that information upon receipt of a verified request;

  • A consumer’s right to request that a business that sells the consumer’s personal information, or discloses it for a business purpose, disclose the categories of information it collects, the categories of information it sells or discloses, and the identity of any third parties to which the information was sold or disclosed;

  • A consumer’s right to opt out of the sale of personal information by a business, along with a prohibition on the business discriminating against the consumer for exercising this right, including a prohibition on charging the consumer who opts out a different price or providing the consumer a different quality of goods or services, except if the difference is reasonably related to value provided by the consumer’s data.

Covered entities under AB 375 would include any entity that does business in the State of California and satisfies one or more of the following: (i) has annual gross revenue in excess of $25 million; (ii) alone or in combination, annually buys, receives for the business’s commercial purposes, sells, or shares for commercial purposes the personal information of 50,000 or more consumers, households, or devices; or (iii) derives 50 percent or more of its annual revenues from selling consumers’ personal information.
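For readers who think in code, the sketch below (in TypeScript, with hypothetical field names of our own choosing) simply restates the three alternative coverage thresholds described above; it is an illustration of the statutory test, not an official formulation or legal advice.

```typescript
// Illustrative sketch of the AB 375 coverage thresholds described above.
// Field names are hypothetical; the statute, not this code, controls.
interface BusinessProfile {
  doesBusinessInCalifornia: boolean;
  annualGrossRevenueUSD: number;       // total annual gross revenue
  annualPersonalInfoRecords: number;   // consumers, households, or devices bought, received, sold, or shared annually
  shareOfRevenueFromSellingPI: number; // fraction between 0 and 1
}

function isCoveredEntity(b: BusinessProfile): boolean {
  if (!b.doesBusinessInCalifornia) return false;
  return (
    b.annualGrossRevenueUSD > 25_000_000 ||
    b.annualPersonalInfoRecords >= 50_000 ||
    b.shareOfRevenueFromSellingPI >= 0.5
  );
}
```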

Though far-reaching, the amended AB 375 limits legal damages and provides significant concessions to business opponents of the bill. For example, the bill allows a business 30 days to “cure” any alleged violations prior to the California attorney general initiating legal action. Similarly, while a private action is permissible, a consumer is required to provide a business 30 days’ written notice before instituting an action, during which time the business has the same 30 days to “cure” any alleged violations. Specifically, the bill provides: “In the event a cure is possible, if within the 30 days the business actually cures the noticed violation and provides the consumer an express written statement that the violations have been cured and that no further violations shall occur, no action for individual statutory damages or class-wide statutory damages may be initiated against the business.” Civil penalties for actions brought by the Attorney General are capped at $7,500 for each intentional violation. The damages in any private action brought by a consumer are not less than one hundred dollars ($100) and not greater than seven hundred fifty dollars ($750) per consumer per incident, or actual damages, whichever is greater.

Overall, consumer privacy advocates are pleased with the amended legislation, which is “substantially similar to our initiative,” said Alastair Mactaggart, a San Francisco real estate developer leading the ballot initiative. “It gives more privacy protection in some areas, and less in others.”

The consumer rights provided for in the amended version of the California Consumer Privacy Act of 2018 are reminiscent of those found in the European Union’s sweeping privacy regulation, the General Data Protection Regulation (“GDPR”) (See Does the GDPR Apply to Your U.S. Based Company?), which took effect May 25th. Moreover, California is not the only United States locality considering far-reaching privacy protections. Recently, the Chicago City Council introduced the Personal Data Collection and Protection Ordinance, which, inter alia, would require opt-in consent from Chicago residents to use, disclose or sell their personal information. On the federal level, several legislative proposals are being considered to heighten consumer privacy protection, including the Consumer Privacy Protection Act and the Data Security and Breach Notification Act.

 

Jackson Lewis P.C. © 2018
This post was written by Joseph J. Lazzarotti of Jackson Lewis P.C.

The Hacked & the Hacker-for-Hire: Lessons from the Yahoo Data Breaches (So Far)

The fallout from the Yahoo data breaches continues to illustrate how cyberattacks thrust companies into the competing roles of crime victim, regulatory enforcement target and civil litigant.

Yahoo, which is now known as Altaba, recently became the first public company to be fined ($35 million) by the Securities and Exchange Commission for filing statements that failed to disclose known data breaches. This is on top of the $80 million federal securities class action settlement that Yahoo reached in March 2018—the first of its kind based on a cyberattack. Shareholder derivative actions remain pending in state courts, and consumer data breach class actions have survived initial motions to dismiss and remain consolidated in California for pre-trial proceedings. At the other end of the spectrum, a federal judge has balked at the U.S. Department of Justice’s (DOJ) request that a hacker-for-hire indicted in the Yahoo attacks be sentenced to eight years in prison for a digital crime spree that dates back to 2010.

The Yahoo Data Breaches

In December 2014, Yahoo’s security team discovered that Russian hackers had obtained its “crown jewels”—the usernames, email addresses, phone numbers, birthdates, passwords and security questions/answers for at least 500 million Yahoo accounts. Within days of the discovery, according to the SEC, “members of Yahoo’s senior management and legal teams received various internal reports from Yahoo’s Chief Information Security Officer (CISO) stating that the theft of hundreds of millions of Yahoo users’ personal data had occurred.” Yahoo’s internal security team thereafter was aware that the same hackers were continuously targeting Yahoo’s user database throughout 2015 and early 2016, and also received reports that Yahoo user credentials were for sale on the dark web.

In the summer of 2016, Yahoo was in negotiations with Verizon to sell its operating business. In response to due diligence questions about its history of data breaches, Yahoo gave Verizon a spreadsheet falsely representing that it was aware of only four minor breaches involving users’ personal information.  In June 2016, a new Yahoo CISO (hired in October 2015) concluded that Yahoo’s entire database, including the personal data of its users, had likely been stolen by nation-state hackers and could be exposed on the dark web in the immediate future. At least one member of Yahoo’s senior management was informed of this conclusion. Yahoo nonetheless failed to disclose this information to Verizon or the investing public. It instead filed the Verizon stock purchase agreement—containing an affirmative misrepresentation as to the non-existence of such breaches—as an exhibit to a July 25, 2016, Form 8-K, announcing the transaction.

On September 22, 2016, Yahoo finally disclosed the 2014 data breach to Verizon and in a press release attached to a Form 8-K.  Yahoo’s disclosure pegged the number of affected Yahoo users at 500 million.

The following day, Yahoo’s stock price dropped by 3%, and it lost $1.3 billion in market capitalization. After Verizon declared the disclosure and data breach a “material adverse event” under the Stock Purchase Agreement, Yahoo agreed to reduce the purchase price by $350 million (a 7.25% reduction in price) and agreed to share liabilities and expenses relating to the breaches going forward.

Since September 2016, Yahoo has twice revised its data breach disclosure. In December 2016, Yahoo disclosed that hackers had stolen data from 1 billion Yahoo users in August 2013, and that in 2015 and 2016 hackers had also forged cookies that would allow an intruder to access user accounts without supplying a valid password. On March 1, 2017, Yahoo filed its 2016 Form 10-K, describing the 2014 hacking incident as having been committed by a “state-sponsored actor,” and the August 2013 hacking incident by an “unauthorized third party.” As to the August 2013 incident, Yahoo stated that “we have not been able to identify the intrusion associated with this theft.” Yahoo disclosed security incident expenses of $16 million ($5 million for forensics and $11 million for lawyers), and flatly stated: “The Company does not have cybersecurity liability insurance.”

The same day, Yahoo’s general counsel resigned as an independent committee of the Yahoo Board received an internal investigation report concluding that “[t]he 2014 Security Incident was not properly investigated and analyzed at the time, and the Company was not adequately advised with respect to the legal and business risks associated with the 2014 Security Incident.” The internal investigation found that “senior executives and relevant legal staff were aware [in late 2014] that a state-sponsored actor had accessed certain user accounts by exploiting the Company’s account management tool.”

The report concluded that “failures in communication, management, inquiry and internal reporting contributed to the lack of proper comprehension and handling of the 2014 Security Incident.” Yahoo’s CEO, Marissa Mayer, also forfeited her annual bonus as a result of the report’s findings.

On September 1, 2017, a California federal judge partially denied Yahoo’s motion to dismiss the data breach class actions. Then, on October 3, 2017, Yahoo disclosed that all of its users (3 billion accounts) had likely been affected by the hacking activity that traces back to August 2013. During a subsequent hearing held in the consumer data breach class action, a Yahoo lawyer stated that the company had confirmed the new totals on October 2, 2017, based on further forensic investigation conducted in September 2017. That forensic investigation was prompted, Yahoo’s counsel said, by recent information obtained from a third party about the scope of the August 2013 breach. As a result of the new disclosures, the federal judge granted the plaintiffs’ request to amend their complaint to add new allegations and causes of action, potentially including fraud claims and requests for punitive damages.

The SEC Breaks New Cybersecurity Ground

Just a month after issuing new interpretive guidance about public company disclosures of cyberattacks (see our Post and Alert), the SEC has now issued its first cease-and-desist order and penalty against a public company for failing to disclose known cyber incidents in its public filings. The SEC’s administrative order alleges that Yahoo violated Sections 17(a)(2) & (3) of the Securities Act of 1933 and Section 13(a) of the Securities Exchange Act of 1934 and related rules when its senior executives discovered a massive data breach in December 2014, but failed to disclose it until after its July 2016 merger announcement with Verizon.

During that two-year window, Yahoo filed a number of reports and statements with the SEC that misled investors about Yahoo’s cybersecurity history. For instance, the SEC found that in its 2014-2016 annual and quarterly reports, Yahoo included risk factor disclosures stating that the company “faced the risk” of potential future data breaches, “without disclosing that a massive data breach had in fact already occurred.”

Yahoo management’s discussion and analysis of financial condition and results of operations (MD&A) was also misleading, because it “omitted known trends and uncertainties with regard to liquidity or net revenue presented by the 2014 breach.” Knowing full well of the massive breach, Yahoo nonetheless filed a July 2016 proxy statement relating to its proposed sale to Verizon that falsely denied knowledge of any such massive breach. It also filed a stock purchase agreement that it knew contained a material misrepresentation as to the non-existence of the data breaches.

Despite being informed of the data breach within days of its discovery, Yahoo’s legal and management team failed to properly investigate the breach and made no effort to disclose it to investors. As the SEC described the deficiency, “Yahoo senior management and relevant legal staff did not properly assess the scope, business impact, or legal implications of the breach, including how and where the breach should have been disclosed in Yahoo’s public filings or whether the fact of the breach rendered, or would render, any statements made by Yahoo in its public filings to be misleading.” Yahoo’s in-house lawyers and management also did not share information with its auditors or outside counsel to assess disclosure obligations in public filings.

In announcing the penalty, SEC officials noted that Yahoo left “its investors totally in the dark about a massive data breach” for two years, and that “public companies should have controls and procedures in place to properly evaluate cyber incidents and disclose material information to investors.” The SEC also noted that Yahoo must cooperate fully with its ongoing investigation, which may lead to penalties against individuals.

The First Hacker Faces Sentencing

Coincidentally, on the same day that the SEC announced its administrative order and penalty against Yahoo, one of the four hackers indicted for the Yahoo cyberattacks (and the only one in U.S. custody) appeared for sentencing before a U.S. District Judge in San Francisco. Karim Baratov, a 23-year-old hacker-for-hire, had been indicted in March 2017 for various computer hacking, economic espionage, and other offenses relating to the 2014 Yahoo intrusion.

His co-defendants, who remain in Russia, are two officers of the Russian Federal Security Service (FSB) and a Russian hacker who has been on the FBI’s Cyber Most Wanted list since November 2013. The indictment alleges that the Russian intelligence officers used criminal hackers to execute the hacks on Yahoo’s systems, and then to exploit some of that stolen information to hack into other accounts held by targeted individuals.

Baratov is the small fish in the group. His role in the hacking conspiracy focused on gaining unauthorized access to non-Yahoo email accounts of individuals of interest identified through the Yahoo data harvest.  Unbeknownst to Baratov, he was doing the bidding of Russian intelligence officers, who did not disclose their identities to the hacker-for-hire. Baratov asked no questions in return for commissions paid on each account he compromised.

In November 2017, Baratov pled guilty to conspiracy to commit computer fraud and aggravated identity theft. He admitted that, between 2010 and 2017, he hacked into the webmail accounts of more than 11,000 victims, stole and sold the information contained in their email accounts, and provided his customers with ongoing access to those accounts. Baratov was indiscriminate in his hacking for hire, even hacking for a customer who appeared to engage in violence against targeted individuals for money. Between 2014 and 2016, he was paid by one of the Russian intelligence officers to hack into at least 80 webmail accounts of individuals of interest to Russian intelligence identified through the 2014 Yahoo incident. Baratov provided his handler with the contents of each account, plus ongoing access to the account.

The government is seeking eight years of imprisonment, arguing that Baratov “stole and provided his customers the keys to break into the private lives of targeted victims.” In particular, the government cites the need to deter Baratov and other hackers from engaging in cybercrime-for-hire operations. The length of the requested sentence alone suggests that Baratov is not cooperating against other individuals. Baratov’s lawyers have requested a sentence of no more than 45 months, stressing Baratov’s unwitting involvement in the Yahoo attack as a proxy for Russian intelligence officers.

In a somewhat unusual move, the sentencing judge delayed sentencing and asked both parties to submit additional briefing discussing other hacking sentences. The judge expressed concern that the government’s sentencing request was severe and that an eight-year term could create an “unwarranted sentencing disparity” with sentences imposed on other hackers.

The government is going to the mat for Baratov’s victims. On May 8, 2018, the government fired back in a supplemental sentencing memorandum that reaffirms its recommended sentence of eight years of imprisonment. The memorandum contains an insightful summary of federal hacking sentences imposed between 2008 and 2018 on defendants with similar records who engaged in similar conduct. The government surveys various types of hacking cases, from payment card breaches to botnets, banking Trojans and theft and exploitation of intimate images of victims.

The government points to U.S. Sentencing Commission data showing that federal courts almost always have imposed sentences within the advisory Guidelines range on hackers who steal personal information and do not earn a government-sponsored sentence reduction (generally due to lack of cooperation in the government’s investigation). The government also expands on the distinctions between different types of hacking conduct and how each should be viewed at sentencing. It focuses on Baratov’s role as an indiscriminate hacker-for-hire, who targeted individuals chosen by his customers for comprehensive data theft and continuous surveillance. Considering all of the available data, the government presents a very persuasive argument that its recommended sentence of eight years of imprisonment is appropriate. Baratov’s lawyers may now respond in writing, and sentencing is scheduled for May 29, 2018.

Lessons from the Yahoo Hacking Incidents and Responses

There are many lessons to be learned from Yahoo’s cyber incident odyssey. Here are some of them:

The Criminal Conduct

  • Cybercrime as a service is growing substantially.

  • Nation-state cyber actors are using criminal hackers as proxies to attack private entities and individuals. In fact, the Yahoo fact pattern shows that the Russian intelligence services have been doing so since at least 2014.

  • Cyber threat actors, from nation-states to lone wolves, are targeting enormous populations of individuals for cyber intrusions, with goals ranging from espionage to data theft/sale to extortion.

  • User credentials remain hacker gold, providing continued, unauthorized access to online accounts for virtually any targeted victim.

  • Compromises of one online account (such as a Yahoo account) often lead to compromises of other accounts tied to targeted individuals. Credential sharing between accounts and the failure to employ multi-factor authentication make these compromises very easy to execute.

The Incident Responses

  • It’s not so much about the breach as it is about the cover-up. Yahoo ran into trouble with the SEC, other regulators and civil litigants because it failed to disclose its data breaches in a reasonable amount of time. Yahoo’s post-breach injuries were self-inflicted and could have been largely avoided if it had properly investigated, responded to, and disclosed the breaches in real time.

  • SEC disclosures in particular must account for known incidents that could be viewed as material for securities law purposes.  Speaking in the future tense about potential incidents will no longer be sufficient when a company has actual knowledge of significant cyber incidents.

  • Regulators are laying the foundation for ramped-up enforcement actions with real penalties. Like Uber with its recent FTC settlement, Yahoo received some leniency for being first in terms of the SEC’s administrative order and penalty. The stage is now set and everyone is on notice of the type of conduct that will trigger an enforcement action.

  • Yahoo was roundly applauded for its outstanding cooperation with law enforcement agencies investigating the attacks. These investigations go nowhere without extensive victim involvement. Yahoo stepped up in that regard, and that seems to have helped with the SEC, at least.

  • Lawyers must play a key role in the investigation and response to cyber incidents, and their jobs may depend on it. Cyber incident investigations are among the most complex types of investigations that exist. This is not an area for dabblers and rookies. Organizations need to hire in-house lawyers with actual experience and expertise in cybersecurity and cyber incident investigations.

  • Senior executives need to become competent in handling the crisis of cyber incident response. Yahoo’s senior executives knew of the breaches well before they were disclosed. Why the delay? And who made the decision not to disclose in a timely fashion?

  • The failures of Yahoo’s senior executives illustrate precisely why the board of directors now must play a critical role not just in proactive cybersecurity, but in overseeing the response to any major cyber incident. The board must check senior management when it makes the wrong call on incident disclosure.

The Litigation

  • Securities fraud class actions may fare much better than consumer data breach class actions. The significant stock drop coupled with the clear misrepresentations about the material fact of a massive data breach created a strong securities class action that led to an $80 million settlement.  The lack of financial harm to consumers whose accounts were breached is not a problem for securities fraud plaintiffs.

  • Consumer data breach class actions are more routinely going to reach the discovery phase. The days of early dismissals for lack of standing are disappearing quickly.  This change will make the proper internal investigation into incidents and each step of the response process much more critical.

  • Although the jury is still out on how any particular federal judge will sentence a particular hacker, the data is trending in a very positive direction for victims. At least at the federal level, hacks focused on the exploitation of personal information are being met with stiff sentences in many cases. A hacker’s best hope is to earn government-sponsored sentencing reductions due to extensive cooperation. This trend should encourage hacking victims (organizations and individuals alike) to report these crimes to federal law enforcement and to cooperate in the investigation and prosecution of the cybercriminals who attack them.

  • Even if a particular judge ultimately goes south on a government-requested hacking sentence, the DOJ’s willingness to fight hard for a substantial sentence in cases such as this one sends a strong signal to the private sector that victims will be taken seriously and protected if they work with the law enforcement community to combat significant cybercrime activity.

Copyright © by Ballard Spahr LLP
This post was written by Edward J. McAndrew of Ballard Spahr LLP.

Don’t Gamble with the GDPR

The European Union’s (EU) General Data Protection Regulation (GDPR) goes into effect on May 25, and so do the significant fines against businesses that are not in compliance. Failure to comply carries penalties of up to 4 percent of global annual revenue per violation or €20 million, whichever is higher.
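As a purely arithmetic illustration of that penalty cap (the actual fine in any case depends on many factors assessed by the supervisory authority), the maximum exposure is simply the greater of €20 million or 4 percent of global annual revenue:

```typescript
// Illustrative only: the theoretical maximum fine under the formula above.
// Actual fines are set by supervisory authorities based on many factors.
function maxGdprFineEUR(globalAnnualRevenueEUR: number): number {
  return Math.max(20_000_000, 0.04 * globalAnnualRevenueEUR);
}

// Example: €2 billion in global annual revenue puts the ceiling at €80 million.
console.log(maxGdprFineEUR(2_000_000_000)); // 80000000
```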

This regulatory rollout is notable for U.S.-based hospitality businesses because the GDPR is not just limited to the EU. Rather, the GDPR applies to any organization, no matter where it has operations, if it offers goods or services to, or monitors the behavior of, EU individuals. It also applies to organizations that process or hold the personal data of EU individuals regardless of the company’s location. In other words, if a hotel markets its goods or services to EU individuals, beyond merely having a website, the GDPR applies.

The personal data at issue includes an individual’s name, address, date of birth, identification number, billing information, and any information that can be used alone or with other data to identify a person.

The risks are particularly high for the U.S. hospitality industry, including casino-resorts, because their businesses trigger GDPR-compliance obligations on numerous fronts. Hotels collect personal data from their guests to reserve rooms, coordinate event tickets, and offer loyalty/reward programs and other targeted incentives. Hotels with onsite casinos also collect and use financial information to set up gaming accounts, to track player win/loss activity, and to comply with federal anti-money laundering “know your customer” regulations.

Privacy Law Lags in the U.S.

Before getting into the details of GDPR, it is important to understand that the concept of privacy in the United States is vastly different from the concept of privacy in the rest of the world. For example, while the United States does not even have a federal law standardizing data breach notification across the country, the EU has had a significant privacy directive, the Data Protection Directive, since 1995. The GDPR is replacing the Directive in an attempt to standardize and improve data protection across the EU member states.

Where’s the Data?

Probably the most difficult part of the GDPR is understanding what data a company has, where it got it, how it is getting it, where it is stored, and with whom it is sharing that data. Depending on the size and geographical sprawl of the company, the data identification and audit process can be quite mind-boggling.

A proper data mapping process will take a micro-approach in determining what information the company has, where the information is located, who has access to the information, how the information is used, and how the information is transferred to any third parties. Once a company fully understands what information it has, why it has it, and what it is doing with it, it can start preparing for the GDPR.
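One way to make the output of that data-mapping exercise tangible is to keep a structured inventory entry for each category of data a company holds. The sketch below is a simplified, hypothetical record format of our own devising, not a template mandated by the GDPR:

```typescript
// Hypothetical, simplified record format for a data-mapping inventory.
interface DataInventoryRecord {
  dataCategory: string;      // e.g., "guest contact details"
  source: string;            // e.g., "online booking form"
  storageLocation: string;   // e.g., "CRM database, EU region"
  accessibleBy: string[];    // teams or roles with access
  purposes: string[];        // why the data is collected and used
  sharedWith: string[];      // third parties receiving the data
  lawfulBasis: "consent" | "contract" | "legal obligation" | "legitimate interest";
  retentionPeriodDays: number;
}

const exampleRecord: DataInventoryRecord = {
  dataCategory: "guest contact details",
  source: "online booking form",
  storageLocation: "CRM database, EU region",
  accessibleBy: ["front desk", "marketing"],
  purposes: ["reservation management", "loyalty program"],
  sharedWith: ["payment processor"],
  lawfulBasis: "consent",
  retentionPeriodDays: 730,
};
```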

What Does the Compliance Requirement Look Like in Application?

One of the key issues for GDPR-compliance is data subject consent. The concept is easy enough to understand: if a company takes a person’s personal information, it has to fully inform the individual why it is taking the information; what it may do with that information; and, unless a legitimate basis exists, obtain express consent from the individual to collect and use that information.

In terms of what a company has to do to get express consent under the GDPR, it means that a company will have to review and revise (and possibly implement) its internal policies, privacy notices, and vendor contracts to do the following:

  • Inform individuals what data you are collecting and why;

  • Inform individuals how you may use their data;

  • Inform individuals how you may share their data and, in turn, what the entities you shared the data with may do with it; and

  • Provide the individual a clear and concise mechanism to provide express consent for allowing the collection, each use, and transfer of information.

At a functional level, this process entails modifying some internal processes regarding data collection that will allow for express consent. In other words, rather than language such as, “by continuing to stay at this hotel, you consent to the terms of our Privacy Policy,” or “by continuing to use this website, you consent to the terms of our Privacy Policy,” individuals must be given an opportunity not to consent to the collection of their information, e.g., a click-box consent versus an automatically checked box.
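For illustration only, the sketch below shows what an affirmative, unchecked opt-in box might look like in front-end code; the element IDs and wording are hypothetical, and many consent-management approaches would satisfy the same principle:

```typescript
// Minimal sketch of affirmative opt-in consent: the box starts unchecked and
// the form cannot be submitted until the guest actively checks it.
// Element IDs and wording are hypothetical.
const consentBox = document.createElement("input");
consentBox.type = "checkbox";
consentBox.id = "privacy-consent";
consentBox.checked = false; // never pre-checked

const label = document.createElement("label");
label.htmlFor = "privacy-consent";
label.textContent =
  "I consent to the collection and use of my personal data as described in the Privacy Notice.";

const submitButton = document.getElementById("submit-button") as HTMLButtonElement | null;
if (submitButton) {
  submitButton.disabled = true; // disabled until consent is given
  consentBox.addEventListener("change", () => {
    submitButton.disabled = !consentBox.checked; // enabled only by an affirmative act
  });
}

document.getElementById("consent-area")?.append(consentBox, label);
```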

The more difficult part regarding consent is that there is no grandfather clause for personal information collected pre-GDPR. This means that companies with personal data subject to the GDPR will no longer be allowed to have or use that information unless the personal information was obtained in line with the consent requirements of the GDPR or the company obtains proper consent for use of the data prior to the GDPR’s effective date of May 25, 2018.

What Other “Lawful Bases” Exist to Collect Data Besides Consent?

Although consent will provide hotels the largest green light to collect, process, and use personal data, there are other lawful bases that may allow a hotel to collect data. These may include when collection is necessary to perform a contract, to comply with legal obligations (such as AML compliance), or when necessary to serve the hotel’s legitimate interests without overriding the interests of the individual. This means that during the internal audit process of a hotel’s personal information collection methods (e.g., online forms, guest check-in forms, loyalty/rewards programs registration form, etc.), each guest question asked should be reviewed to ensure the information requested is either not personal information or that there is a lawful reason for asking for the information. For example, a guest’s arrival and departure date is relevant data for purposes of scheduling; however, a guest’s birthday, other than ensuring the person is of the legal age to consent, is more difficult to justify.

What Other Data Subject Rights Must Be Communicated?

Another significant requirement is that guests be informed of the various other rights they have under the GDPR and how they can exercise them, including:

  • The right of access to their personal information;

  • The right to rectify their personal information;

  • The right to erase their personal information (the right to be forgotten);

  • The right to restrict processing of their personal information;

  • The right to object;

  • The right of portability, i.e., to have their data transferred to another entity; and

  • The right not to be included in automated marketing initiatives or profiling.

Not only should these data subject rights be spelled out clearly in all guest-facing privacy notices and consent forms, but those notices/forms should include instructions and contact information informing the individuals how to exercise their rights.

What Is Required with Vendor Contracts?

Third parties are given access to certain data for various reasons, including to process credit card payments, implement loyalty/rewards programs, etc. For a hotel to allow a third party to access personal data, it must enter into a GDPR-compliant Data Processing Agreement (DPA) or revise an existing one so that it is GDPR compliant. This is because downstream processors of information protected by the GDPR must also comply with the GDPR. These processor requirements, combined with the controller requirements, i.e., those of the hotel that controls the data, require that a controller and processor enter into a written agreement that expressly provides:

  • The subject matter and duration of processing;

  • The nature and purpose of the processing;

  • The type of personal data and categories of data subject;

  • The obligations and rights of the controller;

  • The processor will only act on the written instructions of the controller;

  • The processor will ensure that people processing the data are subject to a duty of confidence;

  • The processor will take appropriate measures to ensure the security of processing;

  • The processor will only engage sub-processors with the prior consent of the controller under a written contract;

  • The processor will assist the controller in providing subject access and allowing data subjects to exercise their rights under the GDPR;

  • The processor will assist the controller in meeting its GDPR obligations in relation to the security of processing, the notification of personal data breaches, and data protection impact assessments;

  • The processor will delete or return all personal data to the controller as required at the end of the contract; and

  • The processor will submit to audits and inspections, provide the controller with whatever information it needs to ensure that they are both meeting their Article 28 obligations, and tell the controller immediately if it is asked to do something infringing the GDPR or other data protection law of the EU or a member state.

Other GDPR Concerns and Key Features

Consent and data portability are not the only things that hotels and gambling companies need to think about once the GDPR becomes a reality. They also need to think about the following issues:

  • Demonstrating compliance. All companies will need to be able to prove they are complying with the GDPR. This means keeping records of issues such as consent.

  • Data protection officer. Most companies that deal with large-scale data processing will need to appoint a data protection officer.

  • Breach reporting. Breaches of data must be reported to authorities within 72 hours and to affected individuals “without undue delay.” This means that hotels will need to have policies and procedures in place to comply with this requirement and, where applicable, ensure that any processors are contractually required to cooperate with the breach-notification process.
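To make the 72-hour window concrete, here is a minimal sketch (our own illustration, not a regulatory formula) that computes the outer notification deadline from the moment an organization becomes aware of a breach:

```typescript
// Illustrative only: the outer bound of the 72-hour notification window,
// measured from the time the organization becomes aware of the breach.
function notificationDeadline(breachAwareAt: Date): Date {
  return new Date(breachAwareAt.getTime() + 72 * 60 * 60 * 1000);
}

const awareAt = new Date("2018-05-30T09:00:00Z");
console.log(notificationDeadline(awareAt).toISOString()); // "2018-06-02T09:00:00.000Z"
```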

© Copyright 2018 Dickinson Wright PLLC
This post was written by Sara H. Jodka of Dickinson Wright PLLC.

GDPR May 25th Deadline Approaching – Businesses Globally Will Feel Impact

In less than four months, the General Data Protection Regulation (the “GDPR” or the “Regulation”) will take effect in the European Union/European Economic Area, giving individuals in the EU/EEA greater control over their personal data and imposing a sweeping set of privacy and data protection rules on data controllers and data processors alike. Failure to comply with the Regulation’s requirements could result in substantial fines of up to the greater of €20 million or 4% of a company’s annual worldwide gross revenues. Although many American companies that do not have a physical presence in the EU/EEA may have been ignoring GDPR compliance based on the mistaken belief that the Regulation’s burdens and obligations do not apply outside of the EU/EEA, they are doing so at their own peril.

A common misconception is that the Regulation only applies to EU/EEA-based corporations or multinational corporations with operations within the EU/EEA. However, the GDPR’s broad reach applies to any company that is offering goods or services to individuals located within the EU/EEA or monitoring the behavior of individuals in the EU/EEA, even if the company is located outside of the European territory. All companies within the GDPR’s ambit also must ensure that their data processors (i.e., vendors and other partners) process all personal data on the companies’ behalf in accordance with the Regulation, and are fully liable for any damage caused by their vendors’ non-compliant processing. Unsurprisingly, companies are using indemnity and insurance clauses in data processing agreements with their vendors to contractually shift any damages caused by non-compliant processing activities back onto the non-compliant processors, even if those vendors are not located in the EU/EEA. As a result, many American organizations that do not have direct operations in the EU/EEA nevertheless will need to comply with the GDPR because they are receiving, storing, using, or otherwise processing personal data on behalf of customers or business partners that are subject to the Regulation and its penalties. Indeed, all companies with a direct or indirect connection to the EU/EEA – including business relationships with entities that are covered by the Regulation – should be assessing the potential implications of the GDPR for their businesses.

Compliance with the Regulation is a substantial undertaking that, for most organizations, necessitates a wide range of changes, including:

  • Implementing “Privacy by Default” and “Privacy by Design”;
  • Maintaining appropriate data security;
  • Notifying European data protection agencies and consumers of data breaches on an expedited basis;
  • Taking responsibility for the security and processing activities of third-party vendors;
  • Conducting “Data Protection Impact Assessments” on new processing activities;
  • Instituting safeguards for cross-border transfers; and
  • Recordkeeping sufficient to demonstrate compliance on demand.

Failure to comply with the Regulation’s requirements carries significant risk. Most prominently, the GDPR empowers regulators to impose fines for non-compliance of up to the greater of €20 million or 4% of worldwide annual gross revenue. In addition to fines, regulators also may block non-compliant companies from accessing the EU/EEA marketplace through a variety of legal and technological methods. Even setting these potential penalties aside, simply being investigated for a potential GDPR violation will be costly, burdensome and disruptive, since during a pending investigation regulators have the authority to demand records demonstrating a company’s compliance, impose temporary data processing bans, and suspend cross-border data flows.

The impending May 25, 2018 deadline means that there are only a few months left for companies to get their compliance programs in place before regulators begin enforcement. In light of the substantial regulatory penalties and serious contractual implications of non-compliance, any company that could be required to meet the Regulation’s obligations should be assessing its current operations and implementing the necessary controls to ensure that it is processing personal data in a GDPR-compliant manner.

 

© 2018 Neal, Gerber & Eisenberg LLP.
More on the GDPR at the NLR European Union Jurisdiction Page.

Elder Abuse: Are Granny Cams a Solution, a Compliance Burden, or Both?

In Minnesota, 97% of the 25,226 allegations of elder abuse (neglect, physical abuse, unexplained serious injuries and thefts) in state-licensed senior facilities in 2016 were never investigated. This prompted Minnesota Governor Mark Dayton to announce plans last week to form a task force to find out why. As one might expect, Minnesota is not alone. A study published in 2011 found that an estimated 260,000 (1 in 13) older adults in New York had been victims of one form of abuse or another during a 12-month period between 2008 and 2009, with “a dramatic gap” between elder abuse events reported and the number of cases referred to formal elder abuse services. Clearly, states are struggling to protect a vulnerable and growing group of residents from abuse. Technologies such as hidden cameras may help to address the problem, but their use raises privacy, security, compliance, and other concerns.

With governmental agencies apparently lacking the resources to identify, investigate, and respond to mounting cases of elder abuse in the long-term care services industry, and the number of persons in need of long-term care services on the rise, this problem is likely to get worse before it gets better. According to a 2016 CDC report concerning users of long-term care services, more than 9 million people in the United States receive regulated long-term care services. These numbers are only expected to increase. The Family Caregiver Alliance reports that

by 2050, the number of individuals using paid long-term care services in any setting (e.g., at home, residential care such as assisted living, or skilled nursing facilities) will likely double from the 13 million using services in 2000, to 27 million people.

However, technologies such as hidden cameras are making it easier for families and others to step in and help protect their loved ones. In fact, some states are implementing measures to leverage these technologies to help address the problem of elder abuse. For example, New Jersey’s Attorney General recently expanded the “Safe Care Cam” program which lends cameras and memory cards to Garden State residents who suspect their loved ones may be victims of abuse by an in-home caregiver.

Commonly known as “granny cams,” these easy-to-hide devices, which can record video and sometimes audio, are being strategically placed in nursing homes, long-term care, and residential care facilities. For example, the “Charge Cam” is designed to look like and actually function as a plug used to charge smartphone devices. Once plugged in, it is able to record eight hours of video and sound. For a nursing home resident’s family concerned about the treatment of the resident, use of a “Charge Cam” or similar device could be a very helpful way of getting answers to their suspicions of abuse. However, for the unsuspecting nursing home or other residential or long-term care facility, as well as for the well-meaning family members, the use of these devices can pose a number of issues and potential risks. Here are just some questions that should be considered:

  • Is there a state law that specifically addresses “granny cams”? Note that at least five states (Illinois, New Mexico, Oklahoma, Texas, and Washington) have laws specifically addressing the use of cameras in this context. In Illinois, for example, the resident and the resident’s roommate must consent to the camera, and notice must be posted outside the resident’s room to alert those entering the room about the recording.
  • Is consent required from all of the parties to conversations that are recorded by the device?
  • Do the HIPAA privacy and security regulations apply to the video and audio recordings that contain individually identifiable health information of the resident or other residents whose information is captured in the video or audio recorded?
  • How do the features of the device, such as camera placement and zoom capabilities, affect the analysis of the issues raised above?
  • How can the validity of a recording be confirmed?
  • What effects will there be on employee recruiting and employee retention?
  • If the organization permits the device to be installed, what rights and obligations does it have with respect to the scope, content, security, preservation, and other aspects of the recording?

Just as body cameras for police are viewed by some as a way to help address concerns over police brutality allegations, some believe granny cams can serve as a deterrent to abuse of residents at long-term care and similar facilities. However, families and facilities have to consider these technologies carefully.

This post was written by Joseph J. Lazzarotti of Jackson Lewis P.C. © 2017
For more legal analysis, go to The National Law Review 

Employees Sue for Fingerprint Use

Employees of Peacock Foods, an Illinois-based food product manufacturer, recently filed a lawsuit against their employer for alleged violations of Illinois’ Biometric Information Privacy Act. Under BIPA, companies that collect biometric information must inter alia have a written retention policy (that they follow). As part of the policy, the law states that they must delete biometric information after they no longer need it, or three years after the last transaction with the individual, whichever occurs first. Companies also need consent to collect the information under the Illinois law, cannot sell the information, and must obtain consent before sharing it.
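As a rough sketch of how that retention rule might be operationalized in a deletion job (the field names and simplifications here are our own; the statute, not this code, controls):

```typescript
// Illustrative sketch of the retention rule described above: delete biometric
// data once the original purpose is satisfied, or three years after the last
// interaction with the individual, whichever occurs first.
interface BiometricRecord {
  subjectId: string;
  purposeSatisfied: boolean;
  lastInteraction: Date;
}

const THREE_YEARS_MS = 3 * 365 * 24 * 60 * 60 * 1000; // approximation; ignores leap days

function mustDelete(record: BiometricRecord, now: Date = new Date()): boolean {
  const threeYearsPassed =
    now.getTime() - record.lastInteraction.getTime() >= THREE_YEARS_MS;
  return record.purposeSatisfied || threeYearsPassed;
}
```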

According to the plaintiff-employees, Peacock Foods used their fingerprints for a time-tracking system without explaining in writing how the saved fingerprints would be used or whether they would be shared with third parties. This, the employees alleged, violated BIPA’s requirement that the consent request explain how the information will be used and how long it will be kept. The employees also alleged that Peacock Foods did not have a retention schedule and program in place for deleting biometric data. The employees are currently seeking class certification.

Putting it Into Practice: This case is a reminder that plaintiffs’ class action lawyers are looking at BIPA and possible complaints that can be brought under the law. To address the Illinois law, and similar ones in Texas and Washington, companies should look at the notice and consent processes they have in place.

This post was written by Liisa M. Thomas & Mukund H. Sharma of Sheppard Mullin Richter & Hampton LLP., Copyright © 2017

For more Labor & Employment legal analysis, go to The National Law Review

Recording Conversations with Your Cellphone: with Great Power Comes Potential Legal Liability

In the cellphone age, nearly everyone walks around with a multi-tasking recording device in their pocket or purse, and it comes in handy for many of our modern problems: Your dog suddenly started doing something adorable? Open your video app and start rolling. Need to share that epic burger you just ordered with your foodie friends? There’s an app for that. Want to remember the great plot twist you just thought of for that novel you’ve been working on? Record a voice memo.

Sometimes, though, the need arises to record more serious matters. Many people involved in lawsuits choose to record conversations with their phones, all in the name of preserving evidence that might be relevant in court. People involved in contentious divorce or child custody cases, for example, might try to record a hostile confrontation that occurred during a pickup for visitation. Conversely, others might be worried that an ex-spouse has secretly recorded a conversation and plans to use it against them out of context.

But while everyone has the power to record just about anything with a few swipes on their phone, do they have the legal right to do so? If not, what are the possible consequences? Can you even use recorded conversations in court? Consider these important questions before you press record.

Criminal Liability: Can you go to jail just for recording someone’s conversation?

The short answer: Yes. Under Michigan’s Eavesdropping law,[1] it is a felony punishable by up to two years’ imprisonment and a $2,000 fine to willfully use any device to eavesdrop on (meaning to overhear, record, amplify, or transmit) a conversation without the consent of all participants in that conversation.[2] It is also a felony for a person to “use or divulge” any information that they know was obtained through illegal eavesdropping.[3]

But there is one important distinction that Michigan courts have recognized: if you are a participant in the conversation, then you do not need permission of other participants to record the conversation (at least not when it comes to the eavesdropping law; there may be other laws that apply, as discussed below).[4] This makes sense given the purposes of the law. The theory is that if you are a participant in the conversation, then other participants at least have a chance to judge your character and determine if you are the kind of person who might relay the conversation to others (either verbally or by making a recording).

The bottom line is that if you use a device, like your cellphone, to record, overhear, amplify, or transmit a conversation that you are not a part of without the permission of all participants, you could face criminal consequences.

Civil Liability: If someone records your private conversation, can you file a lawsuit against them?

The short answer: Yes. The eavesdropping statute allows eavesdropping victims to bring a civil lawsuit against the perpetrator.[5] But the same distinction applies; you cannot sue someone for recording a conversation that they participated in.

Before filing a civil eavesdropping claim, though, consider what if anything there is to gain. The eavesdropping statute permits a judge to issue an injunction prohibiting the perpetrator from further eavesdropping. This may be a valuable remedy if there is a risk that the eavesdropper would otherwise continue eavesdropping on your conversations. The statute also allows a plaintiff to recover actual damages and punitive damages from the wrongdoer. In many cases, actual damages will likely be minimal, and punitive damages are subject to the whims of the judge or jury deciding the case. As a result, the cost of litigation may exceed any monetary recovery unless actual damages are significant or the eavesdropper’s conduct was egregious enough to elicit a large punitive award from a jury.

Evidence and Admissibility: Can I use a recorded conversation in court?

Many people are familiar with the exclusionary rule that arises from the Fourth Amendment of the United States Constitution, which provides that if police officers obtain evidence as a result of an illegal search or seizure, then the prosecution is prohibited from using that evidence to support their case. This raises the question:

If a regular civilian obtains evidence by recording a conversation in violation of the eavesdropping statute, is that evidence automatically excluded from court proceedings?

The short answer: No. The exclusionary rule is specifically designed to curb the potentially oppressive power of the government in order to guarantee the protections of the Fourth Amendment, at the expense of excluding potentially valuable evidence from court proceedings. Since the Fourth Amendment only restricts government conduct, the exclusionary rule only applies to evidence obtained as a result of unconstitutional government action. As a result, even if a private citizen breaks the law and records your conversation, that recording is not automatically excluded from court.[6]

So does this mean you can use any recorded conversation in court whenever you want?

The short answer: No. Anything presented in court still needs to comply with the Rules of Evidence, and in many cases recorded conversations will not make the cut. A big reason is the hearsay rule, which says that out of court statements cannot be used to prove the truth of the matter asserted.[7] In other words, you can’t use a recording of your neighbor saying “I use my neighbor’s Wi-Fi” as evidence to prove that he was, in fact, using your Wi-Fi.

But there are many exceptions to the hearsay rule which might allow a recorded conversation into court. Salient among these exceptions is the rule that admissions of a party-opponent are not hearsay.[8] Consequently, if a man records his ex-wife’s conversation with her current husband, the hearsay rule will not prevent the man from using the recording of his ex-wife against her in a child custody case; the ex-wife is a “party-opponent” and her out-of-court statements are not considered hearsay.

Continuing this same example, note that the man’s actions would violate the eavesdropping statute (assuming he didn’t have permission to make the recording) because he was not a participant in the hypothetical conversation. But this violation would not keep the recording out of court. Nevertheless, if a prosecutor wanted to press charges, the man could be subject to criminal liability. And if the ex-wife was so inclined, she could file a civil lawsuit against the man and ask for an injunction and monetary damages.

Other Law: Is the eavesdropping statute the only law you need to worry about before recording all of your conversations?

The short answer: No, don’t hit record just yet. Even if you comply with the eavesdropping statute, there are still other potential pitfalls to be aware of. For instance, wiretapping laws govern the recording and interception of telephone calls and electronic communications, and carry criminal penalties. For inter-state phone calls, the laws of other states will come into play as well. And depending on the means you use to obtain a recording and what you do with the recording once you have it, you risk incurring civil liability for a variety of privacy torts, such as intrusion upon seclusion or public disclosure of private facts.

The safest route is to always get permission from everyone involved before recording a conversation or sharing a recorded conversation with anyone. If that’s not an option, consult with a lawyer who has had an opportunity to consider all of the facts involved in your case.

________________________________

[1] MCL 750.539 et seq.
[2] MCL 750.539a; MCL 750.539c.
[3] MCL 750.539e.
[4] See Sullivan v. Gray, 117 Mich. App. 476, 324 N.W.2d 58, 59-61 (1982).
[5] MCL 750.539h.
[6] See, e.g., Swan v. Bob Maxey Lincoln Mercury, No. 216564, 2001 WL 682371, at *2 n3 (Mich. Ct. App. Apr. 24, 2001)
[7] MRE 802.
[8] MRE 801(d)(2).

This post was written by Jeffrey D. Koelzer of  Varnum LLP © 2017
For more legal analysis go to The National Law Review

Employees Celebrate Chip Party: Embedding RFID Chips – Would You Agree to This?

On 1 August 2017, employees of a Wisconsin-based technology company enjoyed a “Chip Party” – but not the salty kind. Twenty-one of Three Square Market’s 85 employees agreed to allow their employer to embed radio frequency identification (RFID) chips in their bodies. We are familiar with the Internet of Things; is this the Internet of People?

Three Square Market (known as 32M) highlighted the convenience of microchipping their employees, reporting that they will be able to use the RFID chip to make purchases in the company break room, open doors, access copy machines and log in to their computers.

While the “chipped” employees reported that they felt only a brief sting when the chips were inserted, chipping employees draws deeper cuts through ethical and privacy issues.

One such issue is the potential for the technology to gradually encroach with further applications not contemplated by its original purpose. RFID technology has the potential to be used for surveillance and location-tracking purposes, similar to GPS technology. It also has potential to be used as a password or authentication tool, to store health information, access public transport or even as a passport.

While these potential applications will offer convenience to employers and consumers, the value of the information generated by each transaction is arguably greater for the marketers, data brokers and law enforcement entities that use it for their own purposes. Once data like this exists, it can be accessed in all manner of circumstances. Can you ever provide sufficient advice and counselling to employees to create informed consent free from the power imbalance of the employment relationship?

We are all keen on tech here at K&L Gates, but no one was putting their hand up for a similar program here; we’ll all just use our pass cards to open the door, thanks. We were left brainstorming films that use implants to see where this technology could take us, as it is all too common in sci-fi films. Have a look at The Final Cut (2004) (warning: 37% Rotten Tomatoes rating), where implants took centre stage by storing people’s experiences. We are not there yet, but we have taken the first wobbly step on the path.

Read more about 32M’s use of RFID chips here.

See here to find out more about tracking employees with other technologies.

Read more legal analysis on the National Law Review.

Olivia Coburn and Cameron Abbott of K&L Gates contributed this article.

Privacy Hat Trick: Three New State Laws to Juggle

Nevada, Oregon and New Jersey recently passed laws focusing on the collection of consumer information, serving as a reminder for advertisers, retailers, publishers and data collectors to keep up-to-date, accurate and compliant privacy and information collection policies.

Nevada: A Website Privacy Notice is Required

Nevada joined California and Delaware in explicitly requiring websites and online services to post an accessible privacy notice. The Nevada law, effective October 1, 2017, requires disclosure of the following:

  • The categories of “covered information” collected about consumers who visit the website or online service;

  • The categories of third parties with whom the operator may share such information;

  • A description of the process, if any, through which consumers may review and request changes to their information;

  • A description of the process by which operators will notify consumers of material changes to the notice;

  • Whether a third party may collect covered information about the consumer’s online activities over time and across different Internet websites or online services; and

  • The effective date of the notice.

“Covered Information” is defined to include a consumer’s name, address, email address, telephone number, social security number, an identifier that allows a specific person to be contacted physically or online, and any other information concerning a person maintained by the operator in combination with an identifier.

Takeaway: Website and online service operators (including ad tech companies and other data collectors) should review their privacy policies to ensure they disclose all collection of information that identifies consumers, that can be used to contact them, or that is maintained in combination with identifying information. Website operators should also be sure that they are aware of, and properly disclose, any information that is shared with or collected by their third-party service providers and how that information is used.

Oregon: Misrepresentation of Privacy Practices = Unlawful Trade Practice

Oregon expanded its definition of an “unlawful trade practice”, effective January 1, 2018, to expressly include using, disclosing, collecting, maintaining, deleting or disposing of information in a manner materially inconsistent with any statement or representation published on a business’s website or in a consumer agreement related to a consumer transaction. The new Oregon law is broader than other similar state laws, which limit their application to “personal information”. Oregon’s law, which does not define “information”, could apply to misrepresentations about any information collection practices, even those unrelated to consumer personal information.

Takeaway: Businesses should be mindful when drafting privacy policies, terms of use, sweepstakes and contest rules and other consumer-facing policies and statements not to misrepresent their practices with respect to any information collected, not just personal information.

New Jersey: ID Cards Can Only be Scanned for Limited Purposes (not Advertising)

New Jersey’s new Personal Information and Privacy Protection Act, effective October 1, 2017, limits the purposes for which a retail establishment may scan a person’s identification card to the following:

  • To verify the authenticity of the card or the identity of the person paying for goods or services with a method other than cash, returning an item or requesting a refund or exchange;

  • To verify the person’s age when providing age-restricted goods or services to the person;

  • To prevent fraud or other criminal activity using a fraud prevention service company or system if the person returns an item or requests a refund or exchange;

  • To prevent fraud or other criminal activity related to a credit transaction to open or manage a credit account;

  • To establish or maintain a contractual relationship;

  • To record, retain, or transmit information required by State or federal law;

  • To transmit information to a consumer reporting agency, financial institution, or debt collector to be used as permitted by the Fair Credit Reporting Act and the Fair Debt Collection Practices Act; or

  • To record, retain, or transmit information governed by the medical privacy and security rules of the Health Insurance Portability and Accountability Act.

The law also prohibits the retention of information scanned from an identification card for verification purposes and specifically prohibits the sharing of information scanned from an identification card with a third party for marketing, advertising or promotional activities, or any other purpose not specified above. The law does make an exception to permit a retailer’s automated return fraud system to share ID information with a third party for purposes of issuing a reward coupon to a loyal customer.

Takeaway: Retail establishments with locations in New Jersey should review their point-of-sale practices to ensure they are not scanning ID cards for marketing, advertising, promotional or any other purposes not permitted by the New Jersey law.

Read more legal analysis at the National Law Review.

This post was written by Julie Erin Rubash of Sheppard Mullin Richter & Hampton LLP.

Weapons in the Cyber Defense Arsenal

In May 2017, the world experienced an unprecedented global cyberattack that targeted the public and private sectors, including an auto factory in France, dozens of hospitals and health care facilities in the United Kingdom, gas stations in China and banks in Russia. This is just the tip of the iceberg and more attacks are certain to follow. As this experience shows, companies of all sizes, across all industries, in every country are vulnerable to cyberattacks that can have devastating consequences for their businesses and operations.

The Malware Families

Exploiting vulnerabilities in Microsoft® software, hackers launched a widespread ransomware attack targeting hundreds of thousands of companies worldwide. The vector, the “WannaCry” malware, encrypts electronic files and locks them until the hacker releases them after a ransom is paid in difficult-to-trace Bitcoin. The malware can also spread to all other computer systems on a network. On the heels of WannaCry, a new attack called “Adylkuzz” is crippling computers by diverting their processing power to mine cryptocurrency.

The most prevalent types of ransomware found in 2016 were Cerber and Locky. Microsoft detected Cerber, which is used in spam campaigns, on more than 600,000 computers and observed that it was one of the most profitable ransomware families of 2016. Spread via malicious spam emails carrying an executable virus file, Cerber has gained popularity due to its Ransomware-as-a-Service (RaaS) business model, which enables less sophisticated hackers to lease the malware.

Check Point Software indicated that Locky was the second most prevalent piece of malware worldwide in November 2016. Microsoft detected Locky on more than 500,000 computers in 2016. First discovered in February 2016, Locky is typically delivered via an email attachment (including Microsoft Office documents and compressed attachments) in phishing campaigns designed to entice unsuspecting individuals to click on the attachment. Of course, as the most recent global attacks demonstrate, hackers are devising and deploying new variants of ransomware with different capabilities all the time.

The Rise of Ransomware Attacks

The rise in ransomware attacks is directly related to the ease with which ransomware can be deployed and the quick return it provides to attackers. The U.S. Department of Justice has reported that there was an average of more than 4,000 ransomware attacks daily in 2016, a 300 percent increase over the prior year. Some experts believe that ransomware may be one of the most profitable cybercrime tactics in history, earning approximately $1 billion in 2016. Worse yet, even after the ransom is paid, some data may already have been compromised or may never be recovered.

The risk is even greater if your ransom-encrypted data contains protected health information (PHI). In July 2016, the U.S. Department of Health and Human Services, Office for Civil Rights (HHS/OCR) advised that the encryption or permanent loss of PHI in a ransomware attack would trigger HIPAA’s Breach Notification Rule for the affected population unless a low probability that the PHI had been compromised could be demonstrated. This means a mandated investigation to assess the likelihood that the PHI was accessed or otherwise compromised.

Ransomware Statistics

According to security products and solutions provider Symantec Corporation, ransomware was the most dangerous cybercrime threat facing consumers and businesses in 2016 (a quick arithmetic check of several of these figures follows the list):

  • The majority of 2016 ransomware infections, 69 percent, occurred on consumer computers, with enterprises accounting for the remaining 31 percent.

  • The average ransom demanded in 2016 rose to $1,077, up from $294 in 2015.

  • There was a 36 percent increase in ransomware infections from 340,665 in 2015 to 463,841 in 2016.

  • The number of ransomware “families” found totaled 101 in 2016, triple the 30 found in 2015.

  • The biggest development of 2016 was the rise of RaaS: malware packages that can be sold or leased to attackers in return for a percentage of the profits.

  • Since January 1, 2016, more than 4,000 ransomware attacks have occurred daily − a 300 percent increase over the approximately 1,000 daily attacks seen in 2015.

  • In the second half of 2016, ransomware’s share of all recognized malware attacks globally nearly doubled, from 5.5 percent to 10.5 percent.
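
For readers who want to sanity-check the percentage figures above, the short Python sketch below recomputes them from the raw numbers quoted in the list; the helper function name is just illustrative, and the percentage for the average ransom is derived here rather than stated by Symantec.

```python
# Recompute the percent-increase figures from the raw numbers quoted above.
def pct_increase(old, new):
    """Percentage increase from old to new."""
    return (new - old) / old * 100

print(f"Infections:     {pct_increase(340_665, 463_841):.0f}%")  # ~36 percent
print(f"Average ransom: {pct_increase(294, 1_077):.0f}%")        # ~266 percent (derived)
print(f"Daily attacks:  {pct_increase(1_000, 4_000):.0f}%")      # 300 percent
```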

The Best Defense Is a Good Offense

While no computer system is perfectly secure, companies can take precautionary measures to increase their preparedness and reduce their exposure to potentially crippling cyberattacks. Although Microsoft no longer supports the Windows XP operating system, which was hit hardest by WannaCry, it released an emergency patch to protect XP machines against the attack. Those still running Windows XP should nonetheless upgrade all devices to a more current operating system that Microsoft fully supports to ensure protection against emerging threats. Currently, that means upgrading to Windows 7, Windows 8 or Windows 10.

Even current, supported software needs to be updated promptly when updates become available; those who delay installing updates put themselves at risk. Microsoft issued a patch for supported operating systems in March 2017 to protect against the vulnerability that WannaCry exploited. Needless to say, many companies did not patch their systems in a timely manner.
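
As a rough, hypothetical illustration of the point about unsupported operating systems (this is not vendor tooling, and the end-of-life list below is an assumption made for the example), a few lines of Python can flag machines that report an end-of-life Windows release:

```python
import platform

# Illustrative list of end-of-life Windows releases -- verify against
# Microsoft's current lifecycle documentation before relying on it.
END_OF_LIFE = {"2000", "XP", "2003Server", "Vista"}

def check_windows_support():
    """Print a warning if this machine reports an unsupported Windows release."""
    if platform.system() != "Windows":
        print("Not a Windows host; check skipped.")
        return
    release = platform.release()  # e.g. "XP", "7", "8", "10"
    if release in END_OF_LIFE:
        print(f"Windows {release} is end-of-life: upgrade to a supported version and patch it.")
    else:
        print(f"Windows {release} detected: confirm the latest security updates are installed.")

check_windows_support()
```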

Ransomware causes even greater business disruption when a company does not have secure backups of the files that are critical to key business functions and operations. It is also important for companies to back up files frequently, because a stale backup that is several months old may not be particularly useful. Companies should likewise make certain that their antivirus and anti-malware software is current to protect against emerging threats.
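
Backup freshness can be monitored with a short script as well. The sketch below is a hypothetical example only (the backup directory path and the seven-day threshold are assumptions), not a substitute for a tested backup and recovery plan:

```python
import os
import time

BACKUP_DIR = "/backups/nightly"  # hypothetical backup location
MAX_AGE_DAYS = 7                 # assumed freshness threshold

def newest_backup_age_days(path):
    """Return the age in days of the newest file under path, or None if no files exist."""
    mtimes = [
        os.path.getmtime(os.path.join(root, name))
        for root, _, files in os.walk(path)
        for name in files
    ]
    if not mtimes:
        return None
    return (time.time() - max(mtimes)) / 86400

age = newest_backup_age_days(BACKUP_DIR)
if age is None:
    print("Warning: no backup files found.")
elif age > MAX_AGE_DAYS:
    print(f"Warning: newest backup is {age:.1f} days old; refresh backups.")
else:
    print(f"OK: newest backup is {age:.1f} days old.")
```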

In addition, companies need to train their employees on detecting and mitigating potential cyber threats. Employees are frequently a company’s first line of defense against many forms of routine cyberattacks that originate from seemingly innocuous emails, attachments and links from unknown sources. Indeed, many cyberattacks can be avoided if employees are simply trained not to click on suspicious links or attachments that could surreptitiously install malware.

Last but not least, companies should consider purchasing cyber liability insurance coverage, which is readily available. While cyber policies are still evolving and there are no standardized policy forms, coverage can be purchased at varying price points with different levels of coverage. Some of the more comprehensive forms of coverage provide additional “bells and whistles,” such as immediate access to preapproved professionals who can guide companies through the legal and technical web of cybersecurity events and incident response.

Other cyber policies afford bundled coverages that may include:

  • The costs of a forensics investigation to identify the source and scope of an incident

  • Notification to affected individuals

  • Remediation in the form of credit monitoring and identity theft restoration services

  • Costs to restore lost, stolen or corrupted data and computer equipment

  • Defense of third-party claims and regulatory investigations arising out of a cyberattack

This post was written by Anjali C. Das, Kevin M. Scott and John Busch of Wilson Elser Moskowitz Edelman & Dicker LLP.