The Hacked & the Hacker-for-Hire: Lessons from the Yahoo Data Breaches (So Far)

The fallout from the Yahoo data breaches continues to illustrate how cyberattacks thrust companies into the competing roles of crime victim, regulatory enforcement target and civil litigant.

Yahoo, which is now known as Altaba, recently became the first public company to be fined ($35 million) by the Securities and Exchange Commission for filing statements that failed to disclose known data breaches. This is on top of the $80 million federal securities class action settlement that Yahoo reached in March 2018—the first of its kind based on a cyberattack. Shareholder derivative actions remain pending in state courts, and consumer data breach class actions have survived initial motions to dismiss and remain consolidated in California for pre-trial proceedings. At the other end of the spectrum, a federal judge has balked at the U.S. Department of Justice’s (DOJ) request that a hacker-for-hire indicted in the Yahoo attacks be sentenced to eight years in prison for a digital crime spree that dates back to 2010.

The Yahoo Data Breaches

In December 2014, Yahoo’s security team discovered that Russian hackers had obtained its “crown jewels”—the usernames, email addresses, phone numbers, birthdates, passwords and security questions/answers for at least 500 million Yahoo accounts. Within days of the discovery, according to the SEC, “members of Yahoo’s senior management and legal teams received various internal reports from Yahoo’s Chief Information Security Officer (CISO) stating that the theft of hundreds of millions of Yahoo users’ personal data had occurred.” Yahoo’s internal security team thereafter was aware that the same hackers were continuously targeting Yahoo’s user database throughout 2015 and early 2016, and also received reports that Yahoo user credentials were for sale on the dark web.

In the summer of 2016, Yahoo was in negotiations with Verizon to sell its operating business. In response to due diligence questions about its history of data breaches, Yahoo gave Verizon a spreadsheet falsely representing that it was aware of only four minor breaches involving users’ personal information.  In June 2016, a new Yahoo CISO (hired in October 2015) concluded that Yahoo’s entire database, including the personal data of its users, had likely been stolen by nation-state hackers and could be exposed on the dark web in the immediate future. At least one member of Yahoo’s senior management was informed of this conclusion. Yahoo nonetheless failed to disclose this information to Verizon or the investing public. It instead filed the Verizon stock purchase agreement—containing an affirmative misrepresentation as to the non-existence of such breaches—as an exhibit to a July 25, 2016, Form 8-K, announcing the transaction.

On September 22, 2016, Yahoo finally disclosed the 2014 data breach to Verizon and in a press release attached to a Form 8-K.  Yahoo’s disclosure pegged the number of affected Yahoo users at 500 million.

The following day, Yahoo’s stock price dropped by 3%, and it lost $1.3 billion in market capitalization. After Verizon declared the disclosure and data breach a “material adverse event” under the Stock Purchase Agreement, Yahoo agreed to reduce the purchase price by $350 million (a 7.25% reduction in price) and agreed to share liabilities and expenses relating to the breaches going forward.

Since September 2016, Yahoo has twice revised its data breach disclosure. In December 2016, Yahoo disclosed that hackers had stolen data from 1 billion Yahoo users in August 2013, and that in 2015 and 2016 intruders had forged cookies allowing them to access user accounts without supplying a valid password. On March 1, 2017, Yahoo filed its 2016 Form 10-K, describing the 2014 hacking incident as having been committed by a “state-sponsored actor,” and the August 2013 hacking incident by an “unauthorized third party.” As to the August 2013 incident, Yahoo stated that “we have not been able to identify the intrusion associated with this theft.” Yahoo disclosed security incident expenses of $16 million ($5 million for forensics and $11 million for lawyers), and flatly stated: “The Company does not have cybersecurity liability insurance.”

The same day, Yahoo’s general counsel resigned as an independent committee of the Yahoo Board received an internal investigation report concluding that “[t]he 2014 Security Incident was not properly investigated and analyzed at the time, and the Company was not adequately advised with respect to the legal and business risks associated with the 2014 Security Incident.” The internal investigation found that “senior executives and relevant legal staff were aware [in late 2014] that a state-sponsored actor had accessed certain user accounts by exploiting the Company’s account management tool.”

The report concluded that “failures in communication, management, inquiry and internal reporting contributed to the lack of proper comprehension and handling of the 2014 Security Incident.” Yahoo’s CEO, Marissa Mayer, also forfeited her annual bonus as a result of the report’s findings.

On September 1, 2017, a California federal judge partially denied Yahoo’s motion to dismiss the data breach class actions. Then, on October 3, 2017, Yahoo disclosed that all of its users (3 billion accounts) had likely been affected by the hacking activity that traces back to August 2013. During a subsequent hearing held in the consumer data breach class action, a Yahoo lawyer stated that the company had confirmed the new totals on October 2, 2017, based on further forensic investigation conducted in September 2017. That forensic investigation was prompted, Yahoo’s counsel said, by recent information obtained from a third party about the scope of the August 2013 breach. As a result of the new disclosures, the federal judge granted the plaintiffs’ request to amend their complaint to add new allegations and causes of action, potentially including fraud claims and requests for punitive damages.

The SEC Breaks New Cybersecurity Ground

Just a month after issuing new interpretive guidance about public company disclosures of cyberattacks (see our Post and Alert), the SEC has now issued its first cease-and-desist order and penalty against a public company for failing to disclose known cyber incidents in its public filings. The SEC’s administrative order alleges that Yahoo violated Sections 17(a)(2) & (3) of the Securities Act of 1933 and Section 13(a) of the Securities Exchange Act of 1934 and related rules when its senior executives discovered a massive data breach in December 2014, but failed to disclose it until after its July 2016 merger announcement with Verizon.

During that two-year window, Yahoo filed a number of reports and statements with the SEC that misled investors about Yahoo’s cybersecurity history. For instance, the SEC found that, in its 2014-2016 annual and quarterly reports, Yahoo included risk factor disclosures stating that the company “faced the risk” of potential future data breaches, “without disclosing that a massive data breach had in fact already occurred.”

Yahoo management’s discussion and analysis of financial condition and results of operation (MD&A) was also misleading, because it “omitted known trends and uncertainties with regard to liquidity or net revenue presented by the 2014 breach.” Knowing full well of the massive breach, Yahoo nonetheless filed a July 2016 proxy statement relating to its proposed sale to Verizon that falsely denied knowledge of any such massive breach. It also filed a stock purchase agreement that it knew contained a material misrepresentation as to the non-existence of the data breaches.

Despite being informed of the data breach within days of its discovery, Yahoo’s legal and management team failed to properly investigate the breach and made no effort to disclose it to investors. As the SEC described the deficiency, “Yahoo senior management and relevant legal staff did not properly assess the scope, business impact, or legal implications of the breach, including how and where the breach should have been disclosed in Yahoo’s public filings or whether the fact of the breach rendered, or would render, any statements made by Yahoo in its public filings to be misleading.” Yahoo’s in-house lawyers and management also did not share information with its auditors or outside counsel to assess disclosure obligations in public filings.

In announcing the penalty, SEC officials noted that Yahoo left “its investors totally in the dark about a massive data breach” for two years, and that “public companies should have controls and procedures in place to properly evaluate cyber incidents and disclose material information to investors.” The SEC also noted that Yahoo must cooperate fully with its ongoing investigation, which may lead to penalties against individuals.

The First Hacker Faces Sentencing

Coincidentally, on the same day that the SEC announced its administrative order and penalty against Yahoo, one of the four hackers indicted for the Yahoo cyberattacks (and the only one in U.S. custody) appeared for sentencing before a U.S. District Judge in San Francisco. Karim Baratov, a 23-year-old hacker-for-hire, had been indicted in March 2017 for various computer hacking, economic espionage, and other offenses relating to the 2014 Yahoo intrusion.

His co-defendants, who remain in Russia, are two officers of the Russian Federal Security Service (FSB) and a Russian hacker who has been on the FBI’s Cyber Most Wanted list since November 2013. The indictment alleges that the Russian intelligence officers used criminal hackers to execute the hacks on Yahoo’s systems, and then to exploit some of that stolen information to hack into other accounts held by targeted individuals.

Baratov is the small fish in the group. His role in the hacking conspiracy focused on gaining unauthorized access to non-Yahoo email accounts of individuals of interest identified through the Yahoo data harvest.  Unbeknownst to Baratov, he was doing the bidding of Russian intelligence officers, who did not disclose their identities to the hacker-for-hire. Baratov asked no questions in return for commissions paid on each account he compromised.

In November 2017, Baratov pled guilty to conspiracy to commit computer fraud and aggravated identity theft. He admitted that, between 2010 and 2017, he hacked into the webmail accounts of more than 11,000 victims, stole and sold the information contained in their email accounts, and provided his customers with ongoing access to those accounts. Baratov was indiscriminate in his hacking for hire, even hacking for a customer who appeared to engage in violence against targeted individuals for money. Between 2014 and 2016, he was paid by one of the Russian intelligence officers to hack into at least 80 webmail accounts of individuals of interest to Russian intelligence identified through the 2014 Yahoo incident. Baratov provided his handler with the contents of each account, plus ongoing access to the account.

The government is seeking eight years of imprisonment, arguing that Baratov “stole and provided his customers the keys to break into the private lives of targeted victims.” In particular, the government cites the need to deter Baratov and other hackers from engaging in cybercrime-for-hire operations. The length of the sentence alone suggests that Baratov is not cooperating against other individuals. Baratov’s lawyers have requested a sentence of no more than 45 months, stressing Baratov’s unwitting involvement in the Yahoo attack as a proxy for Russian intelligence officers.

In a somewhat unusual move, the sentencing judge delayed sentencing and asked both parties to submit additional briefing discussing other hacking sentences. The judge expressed concern that the government’s sentencing request was severe and that an eight-year term could create an “unwarranted sentencing disparity” with sentences imposed on other hackers.

The government is going to the mat for Baratov’s victims. On May 8, 2018, it fired back with a supplemental sentencing memorandum reaffirming its recommended sentence of eight years of imprisonment. The memorandum contains an insightful summary of federal hacking sentences imposed between 2008 and 2018 on defendants with similar records who engaged in similar conduct. The government surveys various types of hacking cases, from payment card breaches to botnets, banking Trojans, and the theft and exploitation of intimate images of victims.

The government points to U.S. Sentencing Commission data showing that federal courts almost always have imposed sentences within the advisory Guidelines range on hackers who steal personal information and do not earn a government-sponsored sentence reduction (generally due to lack of cooperation in the government’s investigation). The government also expands on the distinctions between different types of hacking conduct and how each should be viewed at sentencing. It focuses on Baratov’s role as an indiscriminate hacker-for-hire, who targeted individuals chosen by his customers for comprehensive data theft and continuous surveillance. Considering all of the available data, the government presents a very persuasive argument that its recommended sentence of eight years of imprisonment is appropriate. Baratov’s lawyers may now respond in writing, and sentencing is scheduled for May 29, 2018.

Lessons from the Yahoo Hacking Incidents and Responses

There are many lessons to be learned from Yahoo’s cyber incident odyssey. Here are some of them:

The Criminal Conduct

  • Cybercrime as a service is growing substantially.

  • Nation-state cyber actors are using criminal hackers as proxies to attack private entities and individuals. In fact, the Yahoo fact pattern shows that the Russian intelligence services have been doing so since at least 2014.

  • Cyber threat actors—from nation-states to lone wolves—are targeting enormous populations of individuals for cyber intrusions, with goals ranging from espionage to data theft/sale, to extortion.

  • User credentials remain hacker gold, providing continued, unauthorized access to online accounts for virtually any targeted victim.

  • Compromises of one online account (such as a Yahoo account) often lead to compromises of other accounts tied to targeted individuals. Credential sharing between accounts and the failure to employ multi-factor authentication make these compromises very easy to execute.

The Incident Responses

  • It’s not so much about the breach as it is about the cover-up. Yahoo ran into trouble with the SEC, other regulators and civil litigants because it failed to disclose its data breaches in a reasonable amount of time. Yahoo’s post-breach injuries were self-inflicted and could have been largely avoided if it had properly investigated, responded to, and disclosed the breaches in real time.

  • SEC disclosures in particular must account for known incidents that could be viewed as material for securities law purposes.  Speaking in the future tense about potential incidents will no longer be sufficient when a company has actual knowledge of significant cyber incidents.

  • Regulators are laying the foundation for ramped-up enforcement actions with real penalties. Like Uber with its recent FTC settlement, Yahoo received some leniency for being first in terms of the SEC’s administrative order and penalty. The stage is now set and everyone is on notice of the type of conduct that will trigger an enforcement action.

  • Yahoo was roundly applauded for its outstanding cooperation with law enforcement agencies investigating the attacks. These investigations go nowhere without extensive victim involvement. Yahoo stepped up in that regard, and that seems to have helped with the SEC, at least.

  • Lawyers must play a key role in the investigation and response to cyber incidents, and their jobs may depend on it. Cyber incident investigations are among the most complex types of investigations that exist. This is not an area for dabblers and rookies. Organizations need to hire in-house lawyers with actual experience and expertise in cybersecurity and cyber incident investigations.

  • Senior executives need to become competent in handling the crisis of cyber incident response. Yahoo’s senior executives knew of the breaches well before they were disclosed. Why the delay? And who made the decision not to disclose in a timely fashion?

  • The failures of Yahoo’s senior executives illustrate precisely why the board of directors now must play a critical role not just in proactive cybersecurity, but in overseeing the response to any major cyber incident. The board must check senior management when it makes the wrong call on incident disclosure.

The Litigation

  • Securities fraud class actions may fare much better than consumer data breach class actions. The significant stock drop coupled with the clear misrepresentations about the material fact of a massive data breach created a strong securities class action that led to an $80 million settlement.  The lack of financial harm to consumers whose accounts were breached is not a problem for securities fraud plaintiffs.

  • Consumer data breach class actions are more routinely going to reach the discovery phase. The days of early dismissals for lack of standing are disappearing quickly.  This change will make the proper internal investigation into incidents and each step of the response process much more critical.

  • Although the jury is still out on how any particular federal judge will sentence a particular hacker, the data is trending in a very positive direction for victims. At least at the federal level, hacks focused on the exploitation of personal information are being met with stiff sentences in many cases. A hacker’s best hope is to earn government-sponsored sentencing reductions due to extensive cooperation. This trend should encourage hacking victims (organizations and individuals alike) to report these crimes to federal law enforcement and to cooperate in the investigation and prosecution of the cybercriminals who attack them.

  • Even if a particular judge ultimately goes south on a government-requested hacking sentence, the DOJ’s willingness to fight hard for a substantial sentence in cases such as this one sends a strong signal to the private sector that victims will be taken seriously and protected if they work with the law enforcement community to combat significant cybercrime activity.

Copyright © by Ballard Spahr LLP
This post was written by Edward J. McAndrew of Ballard Spahr LLP.

Don’t Gamble with the GDPR

The European Union’s (EU) General Data Protection Regulation (GDPR) goes into effect on May 25, and so do the significant fines against businesses that are not in compliance. Failure to comply carries penalties of up to 4 percent of global annual revenue per violation or €20 million—whichever is higher.
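The "whichever is higher" mechanics are easy to get backwards, so here is a minimal arithmetic sketch (the function name and figures are illustrative only, not drawn from the regulation's text):

```python
def max_gdpr_fine(global_annual_revenue_eur: float) -> float:
    """Upper ceiling of a GDPR fine: the GREATER of EUR 20 million
    or 4% of global annual revenue (per violation)."""
    return max(20_000_000.0, 0.04 * global_annual_revenue_eur)

# For a company with EUR 300M in revenue, 4% is only EUR 12M,
# so the EUR 20M figure controls:
print(max_gdpr_fine(300_000_000))    # 20000000.0
# For EUR 1B in revenue, 4% (EUR 40M) exceeds EUR 20M:
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
```

The point: the €20 million figure is a floor on the ceiling, not a cap, so large businesses face exposure well beyond it.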

This regulatory rollout is notable for U.S.-based hospitality businesses because the GDPR is not just limited to the EU. Rather, the GDPR applies to any organization, no matter where it has operations, if it offers goods or services to, or monitors the behavior of, EU individuals. It also applies to organizations that process or hold the personal data of EU individuals regardless of the company’s location. In other words, if a hotel markets its goods or services to EU individuals, beyond merely having a website, the GDPR applies.

The personal data at issue includes an individual’s name, address, date of birth, identification number, billing information, and any information that can be used alone or with other data to identify a person.

The risks are particularly high for the U.S. hospitality industry, including casino-resorts, because their businesses trigger GDPR-compliance obligations on numerous fronts. Hotels collect personal data from their guests to reserve rooms, coordinate event tickets, and offer loyalty/reward programs and other targeted incentives. Hotels with onsite casinos also collect and use financial information to set up gaming accounts, to track player win/loss activity, and to comply with federal anti-money laundering “know your customer” regulations.

Privacy Law Lags in the U.S.

Before getting into the details of GDPR, it is important to understand that the concept of privacy in the United States is vastly different from the concept of privacy in the rest of the world. For example, while the United States does not even have a federal law standardizing data breach notification across the country, the EU has had a significant privacy directive, the Data Protection Directive, since 1995. The GDPR is replacing the Directive in an attempt to standardize and improve data protection across the EU member states.

Where’s the Data?

Probably the most difficult part of the GDPR is understanding what data a company has, where it got it, how it is getting it, where it is stored, and with whom it is sharing that data. Depending on the size and geographical sprawl of the company, the data identification and audit process can be quite mind-boggling.

A proper data mapping process will take a micro-approach in determining what information the company has, where the information is located, who has access to the information, how the information is used, and how the information is transferred to any third parties. Once a company fully understands what information it has, why it has it, and what it is doing with it, it can start preparing for the GDPR.
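One way to make the micro-approach concrete is to treat each data element as a structured inventory record. The sketch below is a hypothetical schema (the class and field names are assumptions, not GDPR terms of art), showing the kinds of facts a data map would capture for each element:

```python
from dataclasses import dataclass, field

@dataclass
class DataMapEntry:
    """One row of a hypothetical GDPR data inventory."""
    data_element: str                # what information it is
    source: str                      # where/how it was collected
    storage_location: str            # system or vendor holding it
    access_roles: list               # who can read it
    purpose: str                     # why it is processed
    shared_with: list = field(default_factory=list)  # downstream processors

inventory = [
    DataMapEntry(
        data_element="guest email address",
        source="online booking form",
        storage_location="reservations database",
        access_roles=["front desk", "marketing"],
        purpose="reservation confirmations",
        shared_with=["email service provider"],
    ),
]
```

Even a simple table like this answers the core questions—what the company has, where it came from, who touches it, and where it goes—before any GDPR gap analysis begins.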

What Does the Compliance Requirement Look Like in Application?

One of the key issues for GDPR compliance is data subject consent. The concept is easy enough to understand: if a company takes a person’s personal information, it has to fully inform the individual why it is taking the information and what it may do with it, and, unless another lawful basis exists, obtain express consent from the individual to collect and use that information.

To obtain express consent under the GDPR, a company will have to review and revise (and possibly implement) its internal policies, privacy notices, and vendor contracts to do the following:

  • Inform individuals what data you are collecting and why;

  • Inform individuals how you may use their data;

  • Inform individuals how you may share their data and, in turn, what the entities you shared the data with may do with it; and

  • Provide the individual a clear and concise mechanism to provide express consent for allowing the collection, each use, and transfer of information.

At a functional level, this process entails modifying some internal processes regarding data collection that will allow for express consent. In other words, rather than language such as, “by continuing to stay at this hotel, you consent to the terms of our Privacy Policy,” or “by continuing to use this website, you consent to the terms of our Privacy Policy,” individuals must be given an opportunity not to consent to the collection of their information, e.g., a click-box consent versus an automatically checked box.
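At the code level, the shift from implied to express consent can be sketched as a simple rule: an unchecked (default) box never counts. The helper below is a hypothetical illustration—its name, fields, and logic are assumptions, not a prescribed GDPR mechanism:

```python
from datetime import datetime, timezone

def record_consent(user_id: str, purpose: str, box_checked: bool) -> dict:
    """Record affirmative, per-purpose consent. A box left in its
    default (unchecked) state is never treated as consent."""
    if not box_checked:
        raise ValueError("No affirmative consent given; cannot process data.")
    return {
        "user_id": user_id,
        "purpose": purpose,  # consent is captured per use, not globally
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
```

Storing the purpose and timestamp alongside each consent also supports the GDPR's separate expectation that companies be able to demonstrate that valid consent was obtained.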

The more difficult part regarding consent is that there is no grandfather clause for personal information collected pre-GDPR. This means that companies with personal data subject to the GDPR will no longer be allowed to have or use that information unless the personal information was obtained in line with the consent requirements of the GDPR or the company obtains proper consent for use of the data prior to the GDPR’s effective date of May 25, 2018.

What Are the Other “Lawful Bases” for Collecting Data Besides Consent?

Although consent will provide hotels the largest green light to collect, process, and use personal data, other lawful bases exist that may allow a hotel to collect data. These include when collection is necessary to perform a contract, to comply with legal obligations (such as AML compliance), or to serve the hotel’s legitimate interests without overriding the interests of the individual. This means that during the internal audit of a hotel’s personal information collection methods (e.g., online forms, guest check-in forms, loyalty/rewards program registration forms, etc.), each guest question asked should be reviewed to ensure the information requested is either not personal information or that there is a lawful reason for asking for it. For example, a guest’s arrival and departure dates are relevant data for purposes of scheduling; a guest’s birthday, however, other than ensuring the person is of legal age to consent, is more difficult to justify.

What Other Data Subject Rights Must Be Communicated?

Another significant feature is the GDPR’s requirement that guests be informed of various other rights they have and how they can exercise them, including:

  • The right of access to their personal information;

  • The right to rectify their personal information;

  • The right to erase their personal information (the right to be forgotten);

  • The right to restrict processing of their personal information;

  • The right to object;

  • The right of portability, i.e., to have their data transferred to another entity; and

  • The right not to be included in automated marketing initiatives or profiling.

Not only should these data subject rights be spelled out clearly in all guest-facing privacy notices and consent forms, but those notices/forms should include instructions and contact information informing the individuals how to exercise their rights.

What Is Required with Vendor Contracts?

Third parties are given access to certain data for various reasons, including to process credit card payments, implement loyalty/rewards programs, etc. For a hotel to allow a third party to access personal data, it must enter into a GDPR-compliant Data Processing Agreement (DPA) or revise an existing one so that it is GDPR compliant. This is because downstream processors of information protected by the GDPR must also comply with the GDPR. These processor requirements, combined with the controller requirements (i.e., those of the hotel that controls the data), require that the controller and processor enter into a written agreement that expressly provides:

  • The subject matter and duration of processing;

  • The nature and purpose of the processing;

  • The type of personal data and categories of data subjects;

  • The obligations and rights of the controller;

  • The processor will only act on the written instructions of the controller;

  • The processor will ensure that people processing the data are subject to duty of confidence;

  • The processor will take appropriate measures to ensure the security of processing;

  • The processor will only engage sub-processors with the prior consent of the controller under a written contract;

  • The processor will assist the controller in providing subject access and allowing data subjects to exercise their rights under the GDPR;

  • The processor will assist the controller in meeting its GDPR obligations in relation to the security of processing, the notification of personal data breaches, and data protection impact assessments;

  • The processor will delete or return all personal data to the controller as required at the end of the contract; and

  • The processor will submit to audits and inspections, provide the controller with whatever information it needs to ensure that both parties are meeting their Article 28 obligations, and tell the controller immediately if it is asked to do something that infringes the GDPR or other data protection law of the EU or a member state.

Other GDPR Concerns and Key Features

Consent and data portability are not the only things that hotels and gambling companies need to think about once the GDPR becomes a reality. They also need to think about the following issues:

  • Demonstrating compliance. All companies will need to be able to prove they are complying with the GDPR. This means keeping records of issues such as consent.

  • Data protection officer. Most companies that deal with large-scale data processing will need to appoint a data protection officer.

  • Breach reporting. Breaches of data must be reported to authorities within 72 hours and to affected individuals “without undue delay.” This means that hotels will need to have policies and procedures in place to comply with this requirement and, where applicable, ensure that any processors are contractually required to cooperate with the breach-notification process.
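The 72-hour clock is unforgiving, so incident-response procedures should compute the regulator-notification deadline the moment a breach is confirmed. A minimal sketch (function and variable names are illustrative assumptions):

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """Deadline for notifying the supervisory authority: 72 hours
    after the controller becomes aware of the breach."""
    return aware_at + REPORTING_WINDOW

# Breach confirmed at 09:00 UTC on May 25, 2018:
aware = datetime(2018, 5, 25, 9, 0, tzinfo=timezone.utc)
print(notification_deadline(aware))  # 2018-05-28 09:00:00+00:00
```

Note that the window runs from awareness, not from the intrusion itself—which is why detection, escalation, and processor-cooperation clauses all feed directly into meeting it.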

© Copyright 2018 Dickinson Wright PLLC
This post was written by Sara H. Jodka of Dickinson Wright PLLC.

SEC Issues Updated Disclosure Guidance on Cybersecurity

On February 21, 2018, the U.S. Securities and Exchange Commission (“SEC”) issued updated interpretative guidance to assist public companies in preparing disclosures about cybersecurity risks and incidents. The updated guidance reinforces and expands upon the prior guidance on cybersecurity disclosures issued by the SEC’s Division of Corporation Finance in October 2011. In addition to highlighting the disclosure requirements under the federal securities laws that public companies must pay particular attention to when considering their disclosure obligations with respect to cybersecurity risks and incidents, the updated guidance (1) emphasizes the importance of maintaining comprehensive policies and procedures related to cybersecurity risks and incidents, and (2) discusses the application of insider trading prohibitions and Regulation FD and selective disclosure prohibitions in the cybersecurity context. The guidance specifically notes that the SEC continues to monitor cybersecurity disclosures carefully through its filing review process.

Cybersecurity-Related Disclosures

Timely Disclosure of Material Nonpublic Information

In determining disclosure obligations regarding cybersecurity risks and incidents, companies should analyze the potential materiality of any identified risk and, in the case of incidents, the importance of any compromised information and the impact of the incident on the company’s operations. When assessing the materiality of cybersecurity risks or incidents, the SEC notes that the following factors, among others, should be considered:

  • Nature, extent, and potential magnitude (particularly as it relates to any compromised information or the business and scope of company operations), and
  • Range of possible harm, including harm to the company’s reputation, financial performance, customer and vendor relationships, and possible litigation or regulatory investigations (both foreign and domestic).

When companies become aware of a cybersecurity incident or risk that would be material to investors, the SEC expects companies to disclose such information in a timely manner and sufficiently prior to the offer and sale of securities. In addition, steps should be taken to prevent directors and officers (and other corporate insiders aware of such information) from trading in the company’s securities until investors have been appropriately informed about the incident or risk. Importantly, the SEC states that an ongoing internal or external investigation regarding a cybersecurity incident “would not on its own provide a basis for avoiding disclosure of a material cybersecurity incident.”

Risk Factors

In evaluating cybersecurity risk factor disclosure, the guidance encourages companies to consider the following:

  • the occurrence of prior cybersecurity incidents, including severity and frequency;
  • the probability of the occurrence and potential magnitude of cybersecurity incidents;
  • the adequacy of preventative actions taken to reduce cybersecurity risks and the associated costs, including, if appropriate, discussing the limits of the company’s ability to prevent or mitigate certain cybersecurity risks;
  • the aspects of the company’s business and operations that give rise to material cybersecurity risks and the potential costs and consequences of such risks, including industry-specific risks and third party supplier and service provider risks;
  • the costs associated with maintaining cybersecurity protections, including, if applicable, insurance coverage relating to cybersecurity incidents or payments to service providers;
  • the potential for reputational harm;
  • existing or pending laws and regulations that may affect the requirements to which companies are subject relating to cybersecurity and the associated costs to companies; and
  • litigation, regulatory investigation, and remediation costs associated with cybersecurity incidents.

The guidance also notes that effective communication of cybersecurity risks may require disclosure of previous or ongoing cybersecurity incidents, including incidents involving suppliers, customers, competitors and others.

MD&A of Financial Condition and Results of Operations

The guidance reminds companies that MD&A disclosure of cybersecurity matters may be necessary if the costs or other consequences associated with such matters represent a material event, trend or uncertainty that is reasonably likely to have a material effect on the company’s operations, liquidity or financial condition or would cause reported financial information not to be necessarily indicative of future results. Among other matters, the cost of ongoing cybersecurity efforts (including enhancements to existing efforts), the costs and other consequences of cybersecurity incidents, and the risks of potential cybersecurity incidents could inform a company’s MD&A analysis. In addition to the immediate costs incurred in connection with a cybersecurity incident, companies should also consider costs associated with:

  • loss of intellectual property;
  • implementing preventative measures;
  • maintaining insurance;
  • responding to litigation and regulatory investigations;
  • preparing for and complying with proposed or current legislation;
  • remediation efforts; and
  • addressing harm to reputation and the loss of competitive advantage.

The guidance further notes that the impact of cybersecurity incidents on each reportable segment should also be considered.

Business and Legal Proceedings

Companies are reminded that disclosure may be called for in the (1) Business section of a company’s SEC filings if cybersecurity incidents or risks materially affect a company’s products, services, relationships with customers or suppliers, or competitive conditions, and (2) Legal Proceedings section if a cybersecurity incident results in material litigation against the company.

Financial Statement Disclosures

The SEC expects that a company’s financial reporting and control systems would be designed to provide reasonable assurance that information about the range and magnitude of the financial impacts of a cybersecurity incident would be incorporated into its financial statements on a timely basis as the information becomes available. The guidance provides the following examples of ways that cybersecurity incidents and risks may impact a company’s financial statements:

  • expenses related to investigation, breach notification, remediation and litigation, including the costs of legal and other professional services;
  • loss of revenue, providing customers with incentives or a loss of customer relationship assets value;
  • claims related to warranties, breach of contract, product recall/replacement, indemnification of counterparties, and insurance premium increases; and
  • diminished future cash flows, impairment of intellectual, intangible or other assets; recognition of liabilities; or increased financing costs.

Board Risk Oversight

The securities laws require a company to disclose the extent of its board of directors’ role in the risk oversight of the company, including how the board administers its oversight function and the effect this has on the board’s leadership structure. To the extent cybersecurity risks are material to a company’s business, the disclosure should include the nature of the board’s role in overseeing management of that risk.

Cybersecurity-Related Policies and Procedures

Disclosure Controls and Procedures

The guidance encourages companies to adopt comprehensive policies and procedures related to cybersecurity and to regularly assess their compliance. Companies should evaluate whether they have sufficient disclosure controls and procedures in place to ensure that relevant information about cybersecurity risks and incidents is processed and reported to the appropriate personnel to enable senior management to make disclosure decisions and certifications and to facilitate policies and procedures designed to prohibit directors, officers, and other corporate insiders from trading on the basis of material nonpublic information about cybersecurity risks and incidents. Controls and procedures should enable companies to identify cybersecurity risks and incidents, assess and analyze their impact on a company’s business, evaluate the significance associated with such risks and incidents, provide for open communications between technical experts and disclosure advisors, and make timely disclosures regarding such risks and incidents.

The certifications and disclosures regarding the design and effectiveness of a company’s disclosure controls and procedures should take into account the adequacy of controls and procedures for identifying cybersecurity risks and incidents and for assessing and analyzing their impact. In addition, to the extent cybersecurity risks or incidents pose a risk to a company’s ability to record, process, summarize, and report information that is required to be disclosed in filings, management should consider whether there are deficiencies in disclosure controls and procedures that would render them ineffective.

Insider Trading

Companies and their directors, officers, and other corporate insiders should be mindful of compliance with insider trading laws in connection with information about cybersecurity risks and incidents, including vulnerabilities and breaches. The guidance urges companies to consider how their code of ethics and insider trading policies take into account and prevent trading on the basis of material nonpublic information related to cybersecurity risks and incidents. Specifically, the guidance suggests that as part of the overall investigation and assessment during significant cybersecurity incidents, companies should consider whether and when it may be appropriate to implement restrictions on insiders trading in their securities to avoid the appearance of improper trading during the period following a cybersecurity incident and prior to the dissemination of disclosure.

Regulation FD and Selective Disclosure

Companies are expected to have policies and procedures in place to ensure that any disclosures of material nonpublic information related to cybersecurity risks and incidents are not made selectively, and that any Regulation FD required public disclosure is made simultaneously (in the case of an intentional disclosure) or promptly (in the case of a non-intentional disclosure) and is otherwise compliant with the requirements of Regulation FD.

 

© 2018 Jones Walker LLP
This post was written by Monique A. Cenac and Brett Beter of Jones Walker LLP.

NIST Releases Updated Draft of Cybersecurity Framework

On December 5, 2017, the National Institute of Standards and Technology (“NIST”) announced the publication of a second draft of a proposed update to the Framework for Improving Critical Infrastructure Cybersecurity (“Cybersecurity Framework”), Version 1.1, Draft 2. NIST has also published an updated draft Roadmap to the Cybersecurity Framework, which “details public and private sector efforts related to and supportive of [the] Framework.”

Updates to the Cybersecurity Framework

The second draft of Version 1.1 is largely consistent with Version 1.0. Indeed, the second draft was explicitly designed to maintain compatibility with Version 1.0 so that current users of the Cybersecurity Framework are able to implement the Version 1.1 “with minimal or no disruption.” Nevertheless, there are notable changes between the second draft of Version 1.1 and Version 1.0, which include:

Increased emphasis that the Cybersecurity Framework is intended for broad application across all industry sectors and types of organizations. Although the Cybersecurity Framework was originally developed to improve cybersecurity risk management in critical infrastructure sectors, the revisions note that the Cybersecurity Framework “can be used by organizations in any sector or community” and is intended to be useful to companies, government agencies, and nonprofits, “regardless of their focus or size.” As with Version 1.0, users of the Cybersecurity Framework Version 1.1 are “encouraged to customize the Framework to maximize individual organizational value.” This update is consistent with previous updates to NIST’s other publications, which indicate that NIST is attempting to broaden the focus and encourage use of its cybersecurity guidelines by state, local, and tribal governments, as well as private sector organizations.

An explicit acknowledgement of a broader range of cybersecurity threats. As with Version 1.0, NIST intended the Cybersecurity Framework to be technology-neutral. This revision explicitly notes that the Cybersecurity Framework can be used by all organizations, “whether their cybersecurity focus is primarily on information technology (“IT”), cyber-physical systems (“CPS”) or connected devices more generally, including the Internet of Things (“IoT”).” This change is also consistent with previous updates to NIST’s other publications, which have recently been amended to recognize that cybersecurity risk impacts many different types of systems.

Augmented focus on cybersecurity management of the supply chain. The revised draft expanded section 3.3 to emphasize the importance of assessing the cybersecurity risks up and down supply chains. NIST explains that cyber supply chain risk management (“SCRM”) should address both “the cybersecurity effect an organization has on external parties and the cybersecurity effect external parties have on an organization.” The revised draft incorporates these activities into the Cybersecurity Framework Implementation Tiers, which generally categorize organizations based on the maturity of their cybersecurity programs and awareness. For example, organizations in Tier 1, with the least mature or “partial” awareness, are “generally unaware” of the cyber supply chain risks of products and services, while organizations in Tier 4 use “real-time or near real-time information to understand and consistently act upon” cyber supply chain risks and communicate proactively “to develop and maintain strong supply chain relationships.” The revised draft emphasizes that all organizations should consider cyber SCRM when managing cybersecurity risks.

Increased emphasis on cybersecurity measures and metrics. NIST added a new section 4.0 to the Cybersecurity Framework that highlights the benefits of self-assessing cybersecurity risk based on meaningful measurement criteria, and emphasizes “the correlation of business results to cybersecurity risk management.” According to the draft, “metrics” can “facilitate decision making and improve performance and accountability.” For example, an organization can set standards for system availability, and that measurement can be used as a metric for developing appropriate safeguards to evaluate delivery of services under the Framework’s Protect Function. This revision is consistent with the recently released NIST Special Publication 800-171A, discussed in a previous blog post, which explains the types of cybersecurity assessments that can be used to evaluate compliance with the security controls of NIST Special Publication 800-171.

Future Developments to the Cybersecurity Framework

NIST is soliciting public comments on the draft Cybersecurity Framework and Roadmap through Friday, January 19, 2018. Comments can be emailed to cyberframework@nist.gov.

NIST intends to publish a final Cybersecurity Framework Version 1.1 in early calendar year 2018.

 

© 2017 Covington & Burling LLP
This post was written by Susan B. Cassidy and Moriah Daugherty of Covington & Burling LLP.
 

Can They Really Do That?

Effective October 18, 2017, the U.S. Department of Homeland Security (DHS) implemented new or modified uses of information maintained on individuals as they pass through the immigration process under the Index and National File Tracking System of Records, a system of records used by U.S. Citizenship & Immigration Services (USCIS), Immigration & Customs Enforcement (ICE), and Customs & Border Protection (CBP).

The new regulation updates the categories of individuals covered, to include: individuals acting as legal guardians or designated representatives in immigration proceedings involving an individual who is physically or developmentally disabled or severely mentally impaired (when authorized); Civil Surgeons who conduct and certify medical examinations for immigration benefits; law enforcement officers who certify a benefit requestor’s cooperation in the investigation or prosecution of a criminal activity; and interpreters.

It also expands the categories of records to include: country of nationality; country of residence; the USCIS Online Account Number; social media handles, aliases, associated identifiable information, and search results; and EOIR and BIA proceedings information.

The new regulation also includes updated record source categories: publicly available information obtained from the internet; public records; public institutions; interviewees; and commercial data providers.

With this latest expansion of the data allowed to be collected, a question naturally arises: How does one protect sensitive data housed on electronic devices? In addition to inspecting all persons, baggage and merchandise at a port-of-entry, CBP does indeed have the authority to search electronic devices too. CBP’s stance is that consent is not required for such a search. This position is supported by the U.S. Supreme Court, which has determined that such border searches constitute reasonable searches and therefore do not run afoul of the Fourth Amendment.

Despite this broad license afforded CBP at the port-of-entry, CBP’s authority is checked somewhat in that such searches do not include information located solely in the cloud. Information subject to search must be physically stored on the device in order to be accessible at the port-of-entry. Additionally, examination of attorney-client privileged communications contained on electronic devices first requires CBP’s consultation with Associate/Assistant Chief Counsel of the U.S. Attorney’s Office.

So what may one do to prevent seizure of an electronic device or avoid disclosure of confidential data to CBP during a border search? The New York and Canadian Bar Associations have compiled the following recommendations:

  • Consider carrying a temporary or travel laptop cleansed of sensitive local documents and information. Access data through a VPN connection or cloud-based warehousing.
  • Consider carrying temporary mobile devices stripped of contacts and other confidential information. Have calls forwarded from your office number to the unpublished mobile number when traveling.
  • Back up data and shut down your electronic device well before reaching the inspection area to eliminate access to Random Access Memory.

  • Use an alternate account to hold sensitive information. Apply strong encryption and complex passwords.

  • Partition and encrypt the hard drive.

  • Protect the data port.
  • Clean your electronic device(s) following return.
  • Wipe smartphones remotely.

This post was written by Jennifer Cory of Womble Bond Dickinson (US) LLP. Copyright © 2017, All Rights Reserved.
For more Immigration legal analysis, go to The National Law Review

The Law of Unintended Consequences: BIPA and the Effects of the Illinois Class Action Epidemic on Employers

Has your company recently beefed up its employee identification and access security and added biometric identifiers, such as fingerprints, facial recognition, or retina scans? Have you implemented new timekeeping technology utilizing biometric identifiers like fingerprints or palm prints in lieu of punch clocks? All of these developments provide an extra measure of security control beyond key cards which can be lost or stolen, and can help to control a time-keeping fraud practice known as “buddy punching.” If you have operations and employees in Illinois (or if you utilize biometrics such as voice scans to authenticate customers located in Illinois), your risk and liability could have increased with the adoption of such biometric technology, so read on ….

What’s the Issue in Illinois?

The collection of biometric identifiers is not generally regulated either by the federal government or the states. There are some exceptions, however. Back in 2008, Illinois passed the first biometric privacy law in the United States. The Biometric Information Privacy Act, known as “BIPA,” makes it unlawful for private entities to collect, store, or use biometric information, such as retina/iris scans, voice scans, face scans, or fingerprints, without first obtaining individual consent for such activities. BIPA also requires that covered entities take specific precautions to secure the information. The statute carries statutory penalties for every individual violation, which can multiply quickly … and the lawsuits against employers have been coming by the dozens over the past few months.

The Requirements of BIPA

Among other requirements, under BIPA, any “private entity” — including employers — collecting, storing, or using the biometric information of any individual in Illinois – no matter how it is collected, stored or used, or for what reason – must:

  1. Provide each individual with written notice that his/her biometric information will be collected and stored, including an explanation of the purpose for collecting the information as well as the length of time it will be stored and/or used.
  2. Obtain the subject’s express written authorization to collect and store his/her biometric information, prior to that information being collected.
  3. Develop and make available to the public a written policy establishing a retention schedule and guidelines for destroying the biometric information, which shall include destruction of the information when the reason for collection has been satisfied or three years after the company’s last interaction with the individual, whichever occurs first.

Also, any such information collected may not be disclosed to or shared with third parties without the prior consent of the individual.

The Money Issue

Under the law, plaintiffs may recover statutory damages of $1,000 for each negligent violation and $5,000 for each intentional or reckless violation, plus attorneys’ fees and other relief deemed appropriate by the court. Moreover, if actual damages exceed liquidated damages, a plaintiff is entitled under the Act to pursue actual damages in lieu of liquidated damages.

These damage calculations are made and awarded under BIPA on an individual basis. Do the math: if an employer has 100 employees in Illinois and has allegedly been negligent in obtaining required BIPA consent from each of them, that is a potential exposure of $100,000 in liquidated damages, and $500,000 if the conduct is found intentional or reckless, before adding in the ability to recover attorneys’ fees.
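Because liquidated damages accrue per individual violation, the exposure math is simple multiplication. The sketch below is illustrative only: the per-violation amounts come from the statute as summarized above, while the idea of counting multiple violations per employee (e.g., separate notice, consent, and policy failures) is an assumption about how a plaintiff might plead, not a statement of how any court has ruled.

```python
# Statutory liquidated damages under BIPA, per violation, as described above.
NEGLIGENT = 1_000      # per negligent violation
INTENTIONAL = 5_000    # per intentional or reckless violation

def bipa_exposure(employees: int, violations_per_employee: int = 1,
                  intentional: bool = False) -> int:
    """Rough statutory-damages floor; excludes attorneys' fees, costs,
    and any actual damages a plaintiff might pursue instead."""
    per_violation = INTENTIONAL if intentional else NEGLIGENT
    return employees * violations_per_employee * per_violation

# 100 employees, one negligent violation each:
print(bipa_exposure(100))                    # 100000
# The same workforce at the intentional/reckless rate:
print(bipa_exposure(100, intentional=True))  # 500000
```

Even before attorneys’ fees, the per-employee multiplier is what makes these class actions expensive to defend.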

Who is Getting Sued?

The list of companies sued under BIPA spans industries. The initial groups of defendants included companies such as Facebook, Shutterfly, Google, Six Flags, and Snapchat. A chain of tanning salons and a chain of fitness centers were each sued for using biometric technology to identify members. Between July and October, some 26 class-action lawsuits were filed in Illinois state court by current and former employees alleging their employers had violated BIPA. Defendants include supermarket chains, a gas station and convenience store chain, a chain of senior living facilities, several restaurant groups, and a chain of daycare facilities.

Facts vary from case to case, but nearly all of the recent employee BIPA cases implicate fingerprint or palm-print time-keeping technologies that collect biometric data to clock employees’ work hours. The plaintiffs allege their employers failed to inform employees about the companies’ policies for use, storage and ultimate destruction of the fingerprint data or obtain the employees’ written consent before collecting, using or storing the individual biometric information.

In at least one case, the employee has also alleged fingerprint data was improperly shared with the supplier of the time-tracking machines, and has named that supplier as a defendant as well (Howe v. Speedway LLC, No. 2017-CH-11992 (Ill. Cir. Ct. filed Sept. 1, 2017)).

What Do I Do Now?

In order to avoid becoming the next target, employers with operations and employees in Illinois should ask some basic questions and review processes and procedures:

  1. First question to ask: are we collecting, storing or using individual biometric data for any purpose?
  2. If the answer is yes, has your company issued the required notice and received signed releases/consents from all affected individuals? This release/consent should be obtained at the commencement of employment before any collection of individual biometric data begins. Do you have a publicly available written policy to cover the collection, storage, use and destruction of the data? The employee handbook is the most logical place for this policy.
  3. Review your processes: (a) make sure that any collected data is not being sold or disclosed to third parties, outside of the limited exceptions permitted by the Act, and this includes vendors and third party suppliers of biometric technology who process and store the information in a cloud-based service, and (b) make sure that you evaluate your internal data privacy protocols and processes for protecting this new data set, and be prepared to prove that you have “reasonably sufficient” security measures in place for the individual biometric data.
  4. Review your vendor processes: If a vendor has access to the individual biometric data (such as a software-as-a-service provider), make sure the vendor has sufficient data privacy protocols and processes in place and that you have representations regarding this protection from the vendor.
  5. Review insurance coverage for this type of exposure with your broker.
  6. Remember the data breach issues: Make sure your data breach policies recognize that individual biometric data is considered personal information under Illinois laws addressing data breach notification requirements.

This post was authored by Cynthia J. Larose of Mintz, Levin, Cohn, Ferris, Glovsky and Popeo, P.C. © 2017. For more Labor & Employment legal analysis, go to The National Law Review

So…Everyone’s Been Compromised? What To Do In The Wake of the Equifax Breach

By now, you’ve probably heard that over 143 million records containing highly sensitive personal information have been compromised in the Equifax data breach. With numbers exceeding 40% of the population of the United States at risk, chances are good that you or someone you know – or more precisely, many people you know – will be affected. But until you know for certain, you are probably wondering what to do until you find out.

To be sure, there has been a lot of confusion. Many feel there was an unreasonable delay in reporting the breach. And now that it has been reported, some have suggested that people who sign up with the Equifax website to determine if they were in the breach might be bound to an arbitration clause and thereby waive their right to file suit if necessary later (although Equifax has since said that is not the case). Others have reported that the “personal identification number” (PIN) provided by Equifax for those who do register with the site is nothing more than a date and time stamp, which could be subject to a brute-force attack, which is not necessarily reassuring when dealing with personal information. Still others have reported that the site itself is subject to vulnerabilities such as cross-site scripting (XSS), which could give hackers another mechanism to steal personal information. And some have even questioned the validity of the responses provided by Equifax when people query to see if they might have been impacted.

In all the chaos, it’s hard to know how to best proceed. Fortunately, you have options other than using Equifax’s website.

1. Place a Credit Freeze

Know that if you are a victim of the breach, you will be notified by Equifax eventually. In the meantime, consider placing a credit freeze on your accounts with the three major credit reporting bureaus; you will need to place a freeze with each bureau separately. If you are the victim of identity fraud, or if your state’s law mandates it, a credit freeze can be implemented without charge; in other states, you may incur a small fee. Lists of fees for residents of various states can be found at the TransUnion, Experian, and Equifax websites. Placing a freeze on your credit reports will restrict access to your information and make it more difficult for identity thieves to open accounts in your name. A freeze will not affect your credit score, but there may be a second fee associated with lifting it, so it is important to research your options before proceeding. Also, know that you will likely face a delay period before a freeze can be lifted, so spur-of-the-moment credit opportunities might suffer.

Here is information for freezing your credit with each credit bureau:

Equifax Credit Freeze

  • You may do a credit freeze online or by certified mail (return receipt requested) to:

            Equifax Security Freeze

            P.O. Box 105788

            Atlanta, GA 30348

  • To unfreeze, you must do a temporary thaw by regular mail, online or by calling 1-800-685-1111 (for New York residents call 1-800-349-9960).

Experian Credit Freeze

  • You may do a credit freeze online, by calling 1-888-EXPERIAN (1-888-397-3742) or by certified mail (return receipt requested) to:

            Experian

            P.O. Box 9554

            Allen, TX 75013

  • To unfreeze, you must do a temporary thaw online or by calling 1-888-397-3742.

TransUnion Credit Freeze

  • You may do a credit freeze online, by phone (1-888-909-8872) or by certified mail (return receipt requested) to:

            TransUnion LLC

            P.O. Box 2000

            Chester, PA 19016

  • To unfreeze, you must do a temporary thaw online or by calling 1-888-909-8872.

After you complete a freeze, make sure you have a pen and paper handy because you will be given a PIN code to keep in a safe place.

2. Obtain a Free Copy of Your Credit Report

Consider setting up a schedule to obtain a copy of your free annual credit report from each of the reporting bureaus on a staggered basis. By obtaining and reviewing a report from one of the credit reporting bureaus every three or four months, you can better position yourself to respond to unusual or fraudulent activity more frequently. Admittedly, there is a chance that one of the reporting bureaus might miss an account that is reported by the other two but the benefit offsets the risk.

3. Notify Law Enforcement and Obtain a Police Report

If you find you are the victim of identity fraud (that is, actual fraudulent activity – not just being a member of the class of affected persons), notify your local law enforcement agency to file a police report. Having a police report will help you to challenge fraudulent activity, will provide you with verification of the fraud to provide to credit companies’ fraud investigators, and will be beneficial if future fraud occurs. To that end, be aware that additional fraud may arise closer to the federal tax filing deadline and having a police report already on file can help you resolve identity fraud problems with the Internal Revenue Service if false tax returns are filed under your identity.

4. Obtain an IRS IP PIN

Given the nature of the information involved in the breach, an additional option for individuals residing in Florida, Georgia, and Washington, D.C. is to obtain an IRS IP PIN, which is a 6-digit number assigned to eligible taxpayers to help prevent the misuse of Social Security numbers in federal tax filings. An IP PIN helps the IRS verify a taxpayer’s identity and accept their electronic or paper tax return. When a taxpayer has an IP PIN, it prevents someone else from filing a tax return with the taxpayer’s SSN.

If a return is e-filed with a taxpayer’s SSN and an incorrect or missing IP PIN, the IRS’s system will reject it until the taxpayer submits it with the correct IP PIN or the taxpayer files on paper. If the same conditions occur on a paper filed return, the IRS will delay its processing and any refund the taxpayer may be due for the taxpayer’s protection while the IRS determines if it is truly the taxpayer’s.

Information regarding eligibility for an IRS IP PIN and instructions for obtaining one are available here, and the IRS’s FAQs on the issue are available here.

Conclusion

Clearly, the Equifax breach raises many issues about which many individuals need to be concerned – and the pathway forward is uncertain at the moment. But by being proactive, being cautious, and taking appropriate remedial measures available to everyone, you can better position yourself to avoid fraud, protect your rights, and mitigate future fraud that might arise.

 This post was written by Justin L. Root and Sara H. Jodka of Dickinson Wright PLLC © Copyright 2017
For more legal news go to The National Law Review

Equifax Breach Affects 143M: If GDPR Were in Effect, What Would Be the Impact?

The security breach announced by Equifax Inc. on September 7, 2017, grabbed headlines around the world as Equifax revealed that personal data of roughly 143 million consumers in the United States and certain UK and Canadian residents had been compromised. By exploiting a website application vulnerability, hackers gained access to certain information such as names, Social Security numbers, birth dates, addresses, and in some instances, driver’s license numbers and credit card numbers. While this latest breach will force consumers to remain vigilant about monitoring unauthorized use of personal information and cause companies to revisit security practices and protocols, had this event occurred under the General Data Protection Regulation (GDPR) (set to take effect May 25, 2018), the implications would be significant. This security event should serve as a sobering wake-up call to multinational organizations and any other organization collecting, processing, storing, or transmitting personal data of EU citizens about the protocols they must have in place to respond to security breaches under GDPR requirements.

Data Breach Notification Obligations

Notification obligations for security breaches that affect U.S. residents are governed by a patchwork of state laws. The timing of the notification varies from state to state, with some requiring that notification be made in the “most expeditious time possible,” while others set forth a specific timeframe such as within 30, 45, or 60 days. The United States does not currently have a federal law setting forth notification requirements; a bill proposed in 2015 would have set a 30-day deadline, but it never gained support.

While the majority of the affected individuals appear to be U.S. residents, Equifax stated that some Canadian and UK residents were also affected. Given Equifax’s statement, the notification obligations under GDPR would apply, even post-Brexit, as evidenced by a recent statement of intent maintaining that the United Kingdom will adopt the GDPR once it leaves the EU. Under the GDPR, in the event of a personal data breach, data controllers must notify the supervisory authority “without undue delay and, where feasible, not later than 72 hours after having become aware of it.” If notification is not made within 72 hours, the controller must provide a “reasoned justification” for the delay. A notification to the authority must at least: 1) describe the nature of the personal data breach, including the number and categories of data subjects and personal data records affected, 2) provide the data protection officer’s contact information, 3) describe the likely consequences of the personal data breach, and 4) describe how the controller proposes to address the breach, including any mitigation efforts. If it is not possible to provide all of the information at the same time, it may be provided in phases “without undue further delay.”

According to Equifax’s notification to individuals, it learned of the event on July 29, 2017. If GDPR were in effect, notification would have been required much earlier than September 7, 2017 – within 72 hours of awareness. Non-compliance with the notification requirements could lead to an administrative fine of up to 10 million euros or up to two percent of total worldwide annual turnover, whichever is higher.
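For illustration only, the 72-hour window and the fine ceiling reduce to simple arithmetic. The sketch below uses the dates reported above; the turnover figure is a hypothetical placeholder, and in practice the moment a controller “becomes aware” of a breach is a legal judgment rather than a mechanical computation.

```python
from datetime import datetime, timedelta

GDPR_WINDOW = timedelta(hours=72)  # Art. 33(1): "not later than 72 hours"

def notification_deadline(aware_at: datetime) -> datetime:
    """Latest compliant notification time, measured from awareness."""
    return aware_at + GDPR_WINDOW

def fine_cap(annual_turnover_eur: float) -> float:
    """Ceiling for notification violations: the greater of EUR 10M
    or 2% of total worldwide annual turnover."""
    return max(10_000_000.0, 0.02 * annual_turnover_eur)

# Dates from the article; the turnover figure is a hypothetical placeholder.
aware_at = datetime(2017, 7, 29)
actual_notice = datetime(2017, 9, 7)
deadline = notification_deadline(aware_at)

print(deadline.date())                  # 2017-08-01
print((actual_notice - deadline).days)  # 37 days past the window
print(fine_cap(3_000_000_000))          # 60000000.0 (2% exceeds the EUR 10M floor)
```

Run against the article’s timeline, the sketch shows the September 7 notice arriving roughly 37 days after the hypothetical August 1, 2017 deadline.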

Preparing for Breach Obligations Under GDPR

With a security breach of this magnitude, it is easy to imagine the difficulties organizations will face in mobilizing an incident response plan quickly enough to meet the 72-hour notice requirement under GDPR. However, there are still nearly eight months until GDPR goes into effect on May 25, 2018. Now is a good time for organizations to implement, test, retest, and validate their incident response policies and procedures, and to ensure that employees are aware of their roles and responsibilities in the event of a breach, as part of a GDPR incident response readiness plan.


This post was written by Julia K. Kadish and Aaron K. Tantleff of Foley & Lardner LLP © 2017
For more legal analysis go to The National Law Review

SEC Observations from Recent Cybersecurity Examinations Identify Best Practices

The SEC continues to focus on cybersecurity as an area of concern within the investment management industry.

On August 7, the US Securities and Exchange Commission’s (SEC’s) Office of Compliance Inspections and Examinations (OCIE) released a Risk Alert summarizing its observations from a recent cybersecurity-related examination of 75 firms—including broker-dealers, investment advisers, and investment companies (“funds”) registered with the SEC.

The SEC staff has made it clear that cybersecurity remains a high priority and is likely to be an area of continued scrutiny with the potential for enforcement actions. During a recent interview,[1] the SEC’s co-directors of Enforcement, Stephanie Avakian and Steven Peikin, stated their belief that “[t]he greatest threat to our markets right now is the cyber threat.” This pronouncement follows on the heels of OCIE’s identification of cybersecurity as one of its examination priorities for 2017,[2] OCIE’s release of a Risk Alert on the “WannaCry” ransomware virus,[3] and several significant Regulation S-P enforcement actions involving firms that failed to adequately protect customer information.[4]

This LawFlash details OCIE’s observations from its recent cybersecurity-related examination that were discussed in its Risk Alert.

OCIE’s Examination Identifies Common Issues

OCIE staff observed common issues in a majority of the firms and funds subject to examination. These common issues include the following:

  • Failure to reasonably tailor policies and procedures. Specifically, the examination found issues with policies and procedures that

    • incorporated only general guidance;

    • identified limited examples of safeguards for employees to consider; and

    • did not articulate specific procedures to implement policies.

  • Failure to adhere to or enforce policies and procedures. In some cases, policies and procedures were confusing or did not reflect a firm’s actual practices, including in the following areas:

    • Annual customer protection reviews not actually conducted on an annual basis

    • Policies providing for ongoing reviews to determine whether supplemental security protocols were appropriate performed only annually, or not at all

    • Policies and procedures creating contradictory or confusing instructions for employees[5]

    • Firms not appearing to adequately ensure that cybersecurity awareness training was provided and/or failing to take action where employees did not complete required cybersecurity training

  • Regulation S-P issues among firms that did not appear to adequately conduct system maintenance. Because Regulation S-P was enacted to safeguard the privacy of customer information, OCIE observed that issues arose where firms failed to install software patches to address security vulnerabilities and other operational safeguards to protect customer records and information.

  • Failure to fully remediate some of the high-risk observations that firms discovered when they conducted penetration tests and vulnerability scans.

Cyber Best Practices and Other Observations

OCIE identified elements of what it viewed as “robust” cybersecurity policies and procedures from its examinations. These elements should be treated as best practices, and they are instructive for broker-dealers, investment advisers, and funds implementing, assessing, or enhancing existing cybersecurity-related policies and procedures:

  • Maintenance of data, information, and vendor inventory, including risk classifications

  • Detailed cybersecurity-related instructions, including instructions related to penetration tests, access rights, and reporting guidelines for lost, stolen, or unintentionally disclosed sensitive information

  • Maintenance of prescriptive schedules and processes for testing data integrity and vulnerabilities, including patch management policies

  • Access controls for data and systems

  • Mandatory employee training upon onboarding and periodically thereafter

  • Engaged senior management

OCIE staff noted an overall improvement in firms’ awareness of cyber-related risks and in the implementation of certain cybersecurity practices since its previous Cybersecurity 1 Initiative.[6] Most notably, all broker-dealers, all funds, and nearly all investment advisers in the more recent examinations maintain written policies and procedures related to cybersecurity that address the protection of customer/shareholder records and information. This finding contrasts with the Cybersecurity 1 Initiative, where OCIE found that comparatively fewer broker-dealers and investment advisers had adopted such written policies and procedures.

OCIE staff also noted the following:

  • Nearly all broker-dealers and many investment advisers and funds conducted periodic risk assessments, penetration tests, and vulnerability scans.

  • All broker-dealers and nearly all investment advisers and funds had a process in place for ensuring regular system maintenance.

  • All firms utilized some form of system, utility, or tool to prevent, detect, and monitor data loss as it relates to personally identifiable information.

  • All broker-dealers and a majority of investment advisers and funds maintained cybersecurity organizational charts and/or identified and described cybersecurity roles and responsibilities for the firms’ workforces.

  • Almost all firms either conducted vendor risk assessments or required that vendors provide the firms with risk management and performance reports (i.e., internal and/or external audit reports) and security reviews or certification reports.

  • Information protection programs at the firms typically included relevant cyber-related policies and procedures as well as incident response plans.

Key Takeaways

SEC-registered broker-dealers, investment advisers, and funds should evaluate their policies and procedures to determine whether there are gaps or areas that could be improved based on OCIE’s articulation of best practices. Firms and funds should further evaluate their policies and procedures to ensure that they reflect actual practices and are reasonably tailored to the particular firm’s business. As OCIE notes, effective cybersecurity requires a tailored and risk-based approach to safeguard information and systems.[7]

This post was written by Mark L. Krotoski, Merri Jo Gillette, Sarah V. Riddell, Martin Hirschprung, and Jennifer L. Klass of Morgan, Lewis & Bockius LLP.

Read more legal analysis at The National Law Review.


[1] Sarah Lynch, Exclusive: New SEC Enforcement Chiefs See Cyber Crime as Biggest Market Threat, Reuters.com (Jun. 8, 2017).

[2] OCIE, Examination Priorities for 2017 (Jan. 12, 2017).

[3] National Exam Program Risk Alert, Cybersecurity: Ransomware Alert (May 17, 2017).

[4] In re Morgan Stanley Smith Barney LLC, Exchange Act Release No. 78021, Advisers Act Release No. 4415 (Jun. 8, 2016); In re R.T. Jones Capital Equities Management Inc., Advisers Act Release No. 4204 (Sept. 22, 2015); and In re Craig Scott Capital LLC, Exchange Act Release No. 77595 (Apr. 12, 2016).

[5] OCIE provides an example of confusing policies regarding remote customer access that appeared to be inconsistent with those for investor fund transfers, making it unclear to employees whether certain activity was permissible based on the policies.

[6] See, e.g., OCIE Cybersecurity Initiative (Apr. 15, 2014); see also National Exam Program Risk Alert, Cybersecurity Examination Sweep Summary (Feb. 3, 2015).

[7] For example, the National Institute of Standards and Technology Cybersecurity Framework 1.0 (Feb. 12, 2014) provides a useful flexible approach to assess and manage cybersecurity risk.

Third-Party Aspects of Cybersecurity Protections: Beyond Your Reach but Within Your Control

Data privacy and cybersecurity issues are ongoing concerns for companies in today’s world.  That is nothing new.  By now, every company is aware of the existence of cybersecurity threats and of the need to protect itself.  There are almost daily reports of data breaches and ransomware attacks.  Companies spend substantial resources trying to ensure the security of their confidential information, as well as the personal and confidential information of their customers, employees and business partners.  As part of those efforts, companies must manage and understand their various legal and regulatory obligations governing the protection, disclosure and/or sharing of data – depending on their specific industry and the type of data they handle – as well as meet the expectations of their customers to avoid reputational harm.

Despite the many steps involved in developing wide-ranging cybersecurity protocols – such as establishing a security incident response plan, designating someone to be responsible for cybersecurity and data privacy, training and retraining employees, and requiring passwords to be changed regularly – it is not enough merely to manage risks internal to the company.  Companies are subject to third-party factors not within their immediate control, in particular vendors and employee BYOD (Bring Your Own Device).  If those cybersecurity challenges are not afforded sufficient oversight, they will expose a company to significant risks that will undo all of the company’s hard work trying to secure and defend its data from unauthorized disclosures or cyberattacks.  Although companies may afford some consideration to vendor management and BYOD policies, absent rigorous follow up, a company may too easily leave a gaping hole in its cybersecurity protections.

VENDORS

To accomplish business functions and objectives and to improve services, companies regularly rely on third-party service providers and vendors.  To that end, vendors may be given access to, and control over, confidential or personal information in order to perform the contracted services.  That information may belong to the company, its employees, its clients and/or its business partners.

When information is placed into the hands of a vendor and/or onto its computer systems, stored in its facilities, or handled by its employees or business partners, the information is subject to unknown risks based on what could happen to it while in the third party’s hands.  The possibility of a security breach or of unauthorized use of or access to the information still exists, but a company cannot be sure what the vendor will do to protect against or address those dangers if they arise.  A company cannot simply rely on its vendors to maintain the necessary security protocols; instead it must be vigilant, exercising reasonable due diligence over its vendors and instituting appropriate protections.  To achieve this, a company needs to consider the type of information involved, the level of protection required, the risks at issue and how those risks can be managed and mitigated.

Due Diligence

A company must perform due diligence over the vendor and the services to be provided and should consider, among other things, supplying a questionnaire to the vendor to answer a host of cybersecurity related questions including:

> What services will the vendor provide?  Gain an understanding of the services being provided by the vendor, including whether the vendor only gains access to, or actually takes possession of, any information.  There is an important difference between a vendor (i) having access to a company’s network to implement a third-party solution or provide a third-party service and (ii) taking possession of and/or storing information on its network or even the network of its own third-party vendors.

> Who will have access to the information?  A company should know who at the vendor will have access to the information.  Which employees?  Will the vendor need assistance from other third-parties to provide the contracted-for services?  Does the vendor perform background checks of its employees?  Do protocols exist to prevent employees who are not authorized from having access to the information?

> What security controls does the vendor have in place?  A company should review the vendor’s controls and procedures to make sure they comply not only with applicable legal and regulatory requirements but also with the company’s own standards.  Does the vendor have the financial wherewithal to manage cybersecurity risks?  Does the vendor have cybersecurity insurance?  Does the vendor have a security incident response plan?  To what extent has the vendor trained with or used the plan?  Has the vendor suffered a cyberattack?  If so, it actually may be a good thing depending on how the vendor responded to the attack and what, if anything, it did to improve its security following the attack.  What training is in place for the vendor’s employees?  How is the vendor monitoring itself to ensure compliance with its own procedures?
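To keep diligence responses auditable, a questionnaire like the one above can be tracked as structured data so open items are flagged for follow-up. The sketch below is illustrative only: the vendor name is hypothetical, and the simple answered/satisfactory scoring rule stands in for the qualitative legal and security judgment real diligence requires.

```python
# Hypothetical sketch: encoding a vendor cybersecurity due-diligence
# questionnaire as structured data so responses can be tracked and
# gaps flagged for follow-up.
from dataclasses import dataclass, field

@dataclass
class DiligenceItem:
    question: str
    answered: bool = False       # has the vendor responded at all?
    satisfactory: bool = False   # did the response meet the company's standard?

@dataclass
class VendorQuestionnaire:
    vendor: str
    items: list = field(default_factory=list)

    def open_gaps(self):
        """Questions that are unanswered or answered unsatisfactorily."""
        return [i.question for i in self.items
                if not (i.answered and i.satisfactory)]

# "Acme Hosting" is a made-up vendor used purely for illustration.
q = VendorQuestionnaire("Acme Hosting", [
    DiligenceItem("Does the vendor take possession of data or only access it?"),
    DiligenceItem("Who at the vendor will have access to the information?"),
    DiligenceItem("Does the vendor have a security incident response plan?",
                  answered=True, satisfactory=True),
    DiligenceItem("Does the vendor carry cybersecurity insurance?",
                  answered=True, satisfactory=False),
])
print(len(q.open_gaps()))  # 3 items still need follow-up
```

Keeping the checklist in one structure also makes the periodic reviews discussed below easier: rerunning `open_gaps()` after each annual review shows at a glance what remains unresolved.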

The Contract

A company should seek to include strong contractual language to obligate the vendor to exercise its own cybersecurity management and to cooperate with the company to ensure protection of the company’s data.  There are multiple provisions to consider when engaging vendors and drafting or updating contracts to afford the company appropriate protections.  A one-size-fits-all approach for vendors will not work and clauses will need to be modified to take account of, among other things:

> The sensitivity of the information at issue – Does the information include only strictly confidential information, such as trade secrets or news of a potential merger?  Does the information include personal information, such as names, signatures, addresses, email addresses, or telephone numbers?  Does the information include what is considered more highly sensitive personal information, such as SSNs, financial account information, credit card information, tax information, or medical data?

> The standard of care and obligations for the treatment of information – A company should want its vendors to meet the same standards the company demands of itself.  Vendors should be required to acknowledge that they will have access to or will take possession of information and that they will use reasonable care to perform their services, including the collection, access, use, storage, disposal, transmission and disclosure of information, as applicable.  This can, and often should, include: limiting access to only necessary employees; securing business facilities, data centers, paper files, servers and back-up systems; implementing database security protocols, including authentication and access controls; encrypting highly sensitive personal information; and providing privacy and security training to employees.  Contracts also should provide that vendors are responsible for any unauthorized receipt, transmission, storage, disposal, use, or disclosure of information, including the actions and/or omissions of their employees and/or relevant third parties whom the vendors retain.

> Expectations in the event of a security breach at the company – A company should include a provision requiring a vendor’s reasonable cooperation if the company experiences a breach.  A company should have a contact at each of its vendors, who is available 24/7 to help resolve a security breach.  Compliance with a company’s own obligations to deal with a breach (including notification or remediation) could be delayed if a vendor refuses to timely provide necessary information or copies of relevant documents.  A company also can negotiate to include an indemnification provision requiring a vendor to reimburse the company for reasonable costs incurred in responding to and mitigating damages caused by any security breach related to the work performed by the vendor.

> Expectations in the event of a security breach at the vendor – A company should demand reasonable notification if the vendor experiences a security breach and require the vendor to take reasonable steps and use best efforts to remediate the breach and to try to prevent future breaches.  A company should negotiate for a provision permitting the company to audit the vendor’s security procedures and perhaps even to physically inspect the vendor’s servers and data storage facilities if the data at issue is particularly sensitive.

Monitoring

Due diligence and contractual provisions are necessary steps in managing the cybersecurity risks that a vendor presents, but absent consistent and proactive monitoring of the vendor relationship, including periodic audits and updates to vendor contracts, all prior efforts to protect the company in this respect will be undermined.  Determining who within the company is responsible for the relationship  – HR? Procurement? Legal? – is critical to help manage the vendor relationship.

> Schedule annual or semi-annual reviews of the vendor relationship –  A company not only should confirm that the vendor is following its cybersecurity protocols but also should inquire if any material changes to those protocols have been instituted that impact the manner in which the vendor handles the company’s data.  Depending on the level of sensitivity of the data being handled by the vendor, a company may consider retaining a third-party reviewer to evaluate the vendor.

> Update the vendor contract, as necessary – A company employee should be responsible to review vendor contracts annually to determine if any changes are necessary in view of cybersecurity concerns.

BYOD

Ransomware – malicious software that a hacker deposits onto a company’s network to encrypt the company’s data and hold it hostage until a ransom is paid to decrypt it – certainly is a heightened concern for all companies.  It is the fastest growing malware targeting all industries, with more than 50% growth in recent years.  Every company is wary of ransomware and is trying to do as much as possible to protect itself from hackers.  The best defenses against ransomware are to (i) periodically train and retrain your employees to be on the lookout for ransomware; (ii) regularly back up your data systems; and (iii) split up the locations where data is maintained to limit the damage in the event some servers fall victim to ransomware.  One thing that easily is overlooked, however, or is afforded more limited consideration, is a company’s BYOD policy and the enforcement of that policy.

Permitting a company’s employees to use their own personal electronic devices to work remotely will lower overhead costs and improve efficiency but will bring a host of security and compliance concerns.  The cybersecurity and privacy protocols that the company established and vigorously pursues inside the company must also be followed by its employees when using their personal devices – home computers, tablets, smartphones – outside the company.  Employees likely are more interested, however, in the ease of access to work remotely than in ensuring that proper cybersecurity measures are followed with respect to their personal devices.  Are the employees using sophisticated passwords on their personal devices or any passwords at all?  Do the employees’ personal devices have automatic locks?  Are the employees using the most current software and installing security updates?

These concerns are real.  In May of 2017, the WannaCry ransomware attack infected more than 200,000 computers in over 100 countries, incapacitating companies and hospitals.  Hackers took advantage of the failure to install a patch to Microsoft Windows, which Microsoft had issued weeks earlier.  Even worse, it was discovered that some infected computers were running outdated versions of Microsoft Windows for which the patch would not have worked regardless.  Companies cannot risk pouring significant resources into establishing a comprehensive security program only to suffer a ransomware attack or otherwise have their efforts undercut by an employee working remotely who failed to install appropriate security protections on his or her personal devices.

The dangers to be wary of include, among others:

> Personal devices may not automatically lock or have a timeout function.

> Employees may not use sophisticated passwords to protect their personal devices.

> Employees may use unsecured Wi-Fi hotspots to access the company’s systems, subjecting the company to heightened risk.

> Employees may access the company’s systems using outdated software that is vulnerable to cyberattacks.

Combatting the Dangers

To address the added risks that accompany allowing BYOD, a company must develop, disseminate and institute a comprehensive BYOD policy.  That policy should identify the necessary security protocols that the employee must follow to use a personal device to work remotely, including, among other things:

> Sophisticated passwords

> Automatic locks

> Encryption of data

> Installation of updated software and security apps

> Remote access from secure Wi-Fi only

> Reporting procedures for lost/stolen devices

A company also should use mobile device management technology to permit the company to remotely access the personal devices of its employees to install any necessary software updates or to limit access to company systems.  Of course, employees must be given notice that the company may use such technology, as well as of that technology’s capabilities.  Among other things, mobile device management technology can:

> Create a virtual partition separating work data and personal data

> Limit an employee’s access to work data

> Allow a company to push security updates onto an employee’s personal device

Enforcement

Similar to vendor management, the cybersecurity benefits of having a robust BYOD policy in place, or even of using mobile device management technology, are significantly weakened unless a company enforces the policy it has instituted.

> A BYOD policy should be a prominent part of any employee cybersecurity training.

> The company should inform the employee of the company’s right to access/monitor/delete information from an employee’s personal device in the event of, among other things, litigation and e-discovery requests, internal investigations, or the employee’s termination.

CONCLUSION

Implementing the above recommendations will not guarantee that a company will never suffer a breach, but it will help stem the threats created by the third-party aspects of its cybersecurity program.  And even if a company ultimately suffers a breach, having had these protections in place to manage the risks associated with vendor management and BYOD will help shield the company from the scrutiny of regulators and the criticism of its customers.

This post was written by Joseph B. Shumofsky of Sills Cummis & Gross P.C.
More legal analysis at The National Law Review.