Center for Devices and Radiological Health (CDRH) Schedules January 2016 Cybersecurity Workshop

FDA's Center for Devices and Radiological Health (CDRH) has scheduled a cybersecurity workshop entitled “Moving Forward: Collaborative Approaches to Medical Device Cybersecurity,” to be held January 20-21, 2016 (see here for the Federal Register announcement).

Background and Workshop Context

As we discussed in a previous post, cybersecurity vulnerability is an increasing concern as medical devices are becoming more connected to the Internet, hospital networks, and other medical devices. Cybersecurity vulnerabilities may result in device malfunction, interruption of healthcare services including treatment interventions, inappropriate access to patient information, and breached electronic health record data integrity.

In the Federal Register announcement for the workshop, FDA states that protecting the Healthcare and Public Health (HPH) critical infrastructure from attack by strengthening cybersecurity is a “high priority” of the Federal Government. For example, two recent Executive Orders (here and here) address enhancing cybersecurity infrastructure and increasing cybersecurity information sharing. Additionally, Presidential Policy Directive 21 states that the Federal Government shall work with the private sector to manage risk and strengthen the security and resilience of critical infrastructure against cyber threats.

Given this context, FDA, other governmental agencies, and public/private partnerships have sought to address cybersecurity vulnerability in recent years. For example, last year, CDRH finalized its guidance for industry entitled, “Content of Premarket Submissions for Management of Cybersecurity in Medical Devices.” Also in 2014, the National Institute of Standards and Technology (NIST) published a voluntary, risk-based framework focusing on enhanced cybersecurity. According to FDA, the HPH sector has utilized the framework to help manage and limit cybersecurity risks.

Workshop Objectives

At the public workshop, CDRH hopes to address vulnerability management throughout the medical device total product lifecycle. According to the Federal Register announcement, vulnerability management includes: analyzing how a vulnerability may affect device functionality, evaluating the vulnerability effect across product types, and selecting temporary solutions that may be employed until a permanent fix can be implemented. Vulnerabilities can be identified by the device manufacturer or external entities, including healthcare facilities, researchers, and other sectors of critical infrastructure.

The Agency believes an important component of vulnerability management is coordinated vulnerability disclosure (also known as responsible disclosure). Under coordinated vulnerability disclosure, all stakeholders agree to delay publicizing vulnerability details for a certain period of time, while the affected manufacturer works to rectify the vulnerability.

Further, CDRH states that one of the tools medical device manufacturers or healthcare facilities may use to evaluate and manage vulnerabilities is the Common Vulnerability Scoring System (CVSS). CVSS is a risk assessment tool that “provides an open and standardized method for rating information technology vulnerabilities.” CDRH notes, however, that CVSS does not directly incorporate patient risk and public health impact factors.
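
To make the scoring concrete, the short Python sketch below computes a CVSS base score under the assumption of the v3.0 equations and metric weights published by FIRST; the example vector (a hypothetical, network-reachable device flaw with high availability impact) is invented for illustration, and, as CDRH notes, the resulting number says nothing by itself about patient risk or public health impact.

    import math

    # CVSS v3.0 base-metric weights (assumed from the FIRST.org v3.0 specification).
    AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}     # Attack Vector
    AC = {"L": 0.77, "H": 0.44}                            # Attack Complexity
    PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}       # Privileges Required, Scope unchanged
    PR_CHANGED = {"N": 0.85, "L": 0.68, "H": 0.50}         # Privileges Required, Scope changed
    UI = {"N": 0.85, "R": 0.62}                            # User Interaction
    CIA = {"H": 0.56, "L": 0.22, "N": 0.0}                 # Confidentiality / Integrity / Availability impact

    def roundup(value):
        """CVSS 'round up': smallest one-decimal number >= value."""
        return math.ceil(value * 10) / 10

    def base_score(av, ac, pr, ui, scope, c, i, a):
        """Compute a CVSS v3.0 base score; scope is 'U' (unchanged) or 'C' (changed)."""
        pr_weight = (PR_CHANGED if scope == "C" else PR_UNCHANGED)[pr]
        iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
        impact = 6.42 * iss if scope == "U" else 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
        exploitability = 8.22 * AV[av] * AC[ac] * pr_weight * UI[ui]
        if impact <= 0:
            return 0.0
        total = impact + exploitability if scope == "U" else 1.08 * (impact + exploitability)
        return roundup(min(total, 10))

    # Hypothetical example: a remotely reachable flaw needing no privileges or user
    # interaction that could interrupt device function (high availability impact).
    print(base_score(av="N", ac="L", pr="N", ui="N", scope="U", c="N", i="N", a="H"))  # 7.5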

Workshop Themes

CDRH states that it hopes to address the following general themes during the workshop:

  • Envisioning a roadmap for coordinated vulnerability disclosure and vulnerability management as part of the broader effort to create a trusted environment for information sharing.

  • Sharing FDA’s current thinking on the implementation of the NIST framework in the medical device total product lifecycle.

  • Adapting cybersecurity and/or risk assessment tools such as CVSS for the medical device operational environment.

  • Adapting and/or implementing existing cybersecurity standards for medical devices.

  • Understanding the challenges that manufacturers face as they increase collaboration with external third parties (cybersecurity researchers, Information Sharing and Analysis Organizations (ISAOs), and end users) to resolve cybersecurity vulnerabilities that impact their devices.

  • Gaining situational awareness of the current activities of the HPH sector to enhance medical device cybersecurity.

  • Identifying cybersecurity gaps and challenges that persist in the medical device ecosystem and beginning to craft action plans to address them.

Persons interested in attending the workshop must register online by January 13, 2016. Public comments concerning the workshop’s objectives or general themes can be submitted online or by mail.

© 2015 Covington & Burling LLP

Government Forces Awaken: Rise of Cyber Regulators in 2016

As the sun sets on 2015, but before it rises again in the New Year, we predict that, in the realm of cyber and data security, 2016 will become known as the “Rise of the Regulators.” Regulators across numerous industries and virtually all levels of government will be brandishing their cyber enforcement and regulatory badges and announcing: “We’re from the Government and we’re here to help.”

The Federal Trade Commission will continue to lead the charge in 2016 as it has for the last several years. Pursuing its mission to protect consumers from unfair trade practices, including from unauthorized disclosures of personal information, and with more than 55 administrative consent decrees and other actions booked so far, the FTC (for now) remains the most experienced cop on the beat. As we described earlier this year, the FTC arrives with bolstered judicial-enforcement authority following the Third Circuit’s decision in the Wyndham Hotel case. Notwithstanding the relatively long list of administrative actions and the agency’s published guidance, businesses that are hacked and lose consumer data are at risk of attracting the attention of FTC cops and of having to prove that their cyber-related systems, acts, and practices were “reasonable.”

But the FTC is not alone. In electronic communications, the Federal Communications Commission (FCC) in 2015 meted out $30 million in fines to telecom and cable providers, including AT&T ($25 million) and Cox Communications ($595K). And this agency, increasingly known for its enforcement activism, may have just begun. Reading its regulatory authority broadly, the FCC has asserted a mandate to take “such actions as are necessary to prevent unauthorized access” to customers’ personally identifiable information. This proclamation, combined with the enlistment of the FCC’s new cyber lawyer/computer scientist wunderkind to lead that agency’s cyber efforts, places another burly cop on the cyber beat.

The Securities and Exchange Commission (SEC) will be patrolling the securities and financial services industries. Through its Office of Compliance Inspections and Examinations (OCIE), the SEC is assessing cyber preparedness in the securities industry, including investment firms’ ability to protect broker-dealer and investment adviser customer information. It has commenced at least one enforcement action based on the agency’s “Safeguards Rule” (Rule 30(a) of Regulation S‑P), which applies the privacy provisions in Title V of the Gramm-Leach-Bliley Act (GLBA) to all registered broker-dealers, investment advisers, and investment companies. With criminals hacking into networks and stealing customer and other information from financial services and other companies, expect more SEC investigations and enforcement actions in 2016.

Moving to the Department of Defense (DoD), new rules, DFARS clauses, and regulations (e.g., DFARS subpart 204.73, DFARS 252.204-7012, and 32 CFR § 236) are likely to prompt the DoD Inspector General and, perhaps, the Defense Contract Audit Agency (DCAA) to examine whether certain defense contractors have the required security controls in place. Neither the DoD nor its auditors have taken action to date. But don’t mistake a lack of overt action for a lack of interest (or planning). It would come as no surprise if, by this time next year, the DoD has launched its first cyber-regulation mission, be it under the False Claims Act, through suspension and debarment proceedings, or through terminations for default.

In addition to these cyber guardians, other federal agencies suiting up for cyber enforcement include:

  • The Consumer Financial Protection Bureau’s (CFPB) growing Cybersecurity Program Management Office;

  • The Department of Energy’s (DOE) Office of Electricity Delivery and Energy Reliability, examining the security surrounding critical infrastructure systems;

  • The Office for Civil Rights (OCR) of the U.S. Department of Health and Human Services, addressing healthcare providers’ and health insurers’ compliance with health information privacy and security safeguard requirements; and

  • The Food and Drug Administration, examining cybersecurity for networked medical devices containing off-the-shelf (OTS) software.

But these are just some of the federal agencies poised for action. State regulators are imposing their own sector-specific cybersecurity regimes as well. For example, the State of California’s Cybersecurity Task Force, New York’s Department of Financial Services, and Connecticut’s Public Utilities Regulatory Authority are turning their attention toward cyber regulation. We believe that other states will join the fray in 2016.

At this relatively early stage of standards and practices development, the National Institute of Standards and Technology (NIST) 2014 Cybersecurity Framework lays much of the foundation for current and future systems, conduct, and practices. The NIST framework is a “must read.” NIST, moreover, provided additional guidance earlier this year in its June 2015 Special Publication 800-171, Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations. While it addresses security standards for nonfederal information systems (e.g., government contractors’ information systems), it also provides important guidance for companies that do not operate within the government contracts sphere. Ultimately, this 2015 NIST publication may serve as an additional general standard against which regulators (and others) assess institutional cybersecurity environments in 2016 and beyond.

But for now, the bottom line is that in 2016 companies must add to their lists of actual or potential cyber risks and liabilities the hydra-headed specter of multi-sector, multi-tiered government regulation, and regulators.

Happy Holidays: VTech Data Breach Affects Over 11 million Parents and Children Worldwide

The recent data breach of Hong Kong-based electronic toy manufacturer VTech Holdings Limited (“VTech” or the “Company”) is making headlines around the world for good reason: it exposed sensitive personal information of over 11 million parent and child users of VTech’s Learning Lodge app store, Kid Connect network, and PlanetVTech in 16 countries! VTech’s Learning Lodge website allows customers to download apps, games, e-books and other educational content to their VTech products; the Kid Connect network allows parents using a smartphone app to chat with their children using a VTech tablet; and PlanetVTech is an online gaming site. As of December 3rd, VTech has suspended all its Learning Lodge sites, the Kid Connect network and thirteen other websites pending investigation.

VTech announced the cyberattack on November 27th by press release and has since issued follow-on press releases on November 30th and December 3rd, noting that “the Learning Lodge, Kid Connect and PlanetVTech databases have been attacked by a skilled hacker” and that the Company is “deeply shocked by this orchestrated and sophisticated attack.” According to the various press releases, upon learning of the cyber attack, VTech “conducted a comprehensive check of the affected site” and has “taken thorough actions against future attacks.” The Company has reported that it is currently working with FireEye’s Mandiant Incident Response services and with law enforcement worldwide to investigate the attack. According to VTech’s latest update on the incident:

  • 4,854,209 parent Learning Lodge accounts containing the following information were affected: name, email address, secret question and answer for password retrieval, IP address, mailing address, download history and encrypted passwords;

  • 6,368,509 child profiles containing the following information were affected: name, gender, and birthdate. 1.2 million of the affected profiles have the Kid Connect app enabled, meaning that the hackers could also have had access to profile photos and undelivered Kid Connect chat messages;

  • The compromised databases also include encrypted Learning Lodge content (bulletin board postings, ebooks, apps, games, etc.), sales report logs and progress logs to track games, but did not include credit card, debit card or other financial account information, Social Security numbers, driver’s license numbers, or ID card numbers; and

  • The affected individuals are located in the following countries: USA, Canada, United Kingdom, Republic of Ireland, France, Germany, Spain, Belgium, the Netherlands, Denmark, Luxembourg, Latin America, Hong Kong, China, Australia and New Zealand. The largest numbers of affected individuals are reported in the U.S. (2,212,863 parent accounts and 2,894,091 child profiles), France (868,650 parent accounts and 1,173,497 child profiles), the UK (560,487 parent accounts and 727,155 child profiles), and Germany (390,985 parent accounts and 508,806 child profiles).

Given the magnitude and wide territorial reach of the VTech cyberattack, the incident is already on the radar of regulators in Hong Kong and at least two attorneys general in the United States. On December 1, the Hong Kong Office of the Privacy Commissioner for Personal Data announced that it has initiated “a compliance check on the data leakage incident” of VTech Learning Lodge. In addition, on December 3rd, two separate class actions were filed against VTech Electronics North America, L.L.C. and VTech Holdings Limited in the Northern District of Illinois. Since the data breach compromised personal information of children located in the United States (first and last name, photographs, online contact information, etc.), it is likely that the Federal Trade Commission (FTC) will investigate VTech’s compliance with the Children’s Online Privacy Protection Act (“COPPA”) and its implementing rule (as amended, the “COPPA Rule”). If a COPPA violation is found, the civil penalties can be steep, reaching up to $16,000 per violation. In addition to civil penalties imposed by a court, the FTC can require an entity to implement a comprehensive privacy program and to obtain regular, independent privacy assessments for a period of time.

©1994-2015 Mintz, Levin, Cohn, Ferris, Glovsky and Popeo, P.C. All Rights Reserved.

Target to Pay Nearly $40 Million to Settle with Banks over Data Breach; Total Costs Reach $290 Million

A settlement filed Wednesday provides that Target Corp. will pay $39.4 million to the banks and credit unions that brought class action claims against the retailer for losses the financial institutions allegedly suffered as a result of Target’s 2013 data breach. The breach, which impacted as many as 110 million individuals, compromised as many as 40 million credit cards.

This most recent settlement comes on the heels of a $67 million settlement with Visa and a $10 million settlement with consumers, both earlier this year. It brings Target’s total costs to a staggering $290 million. Target expects insurers to reimburse it for only $90 million of that total, and shareholder derivative lawsuits are still pending, as are regulatory enforcement and investigation actions by the FTC and various state attorneys general.

While financial institution settlements now top $100 million, trade groups representing banks and credit unions have argued that the Target breach actually cost their members more than $200 million.

Many will recall that the Target breach began after an HVAC vendor was hacked, providing cyber criminals access to Target’s backend system through its vendor interface. While the breadth and scope of Target’s losses are somewhat mind-numbing, this settlement should serve as yet another reminder of why a strong vendor management system, including privacy and data security policies and audits, is especially important in this day and age.

© Polsinelli PC, Polsinelli LLP in California

Hacking Health Care: When Cybersecurity Can Mean Life or Death

Millions of Americans rely on implantable medical devices to stay alive. These battery-operated devices communicate through wireless transmissions — and can be hacked like any other wireless device. For example, a wireless pacemaker regulates a person’s heartbeat and records the heart’s activity, and then transmits this information to doctors who can reprogram the pacemaker. The interconnectivity between medical devices and clinical systems leaves wireless medical devices vulnerable to security breaches.

Cybersecurity no longer just applies to computer networks and financial data; modern implantable medical devices have the same vulnerability and also require cybersecurity. In fact, in a span of six months, hackers attempted to log into MRI and defibrillator machines over ten thousand times and attempted to download malware approximately 300 times. Had these hackers been successful, they could have accessed patients’ personal information or reprogrammed the defibrillators to deliver deadly jolts of electricity to patients’ hearts.

The government is already taking action. In 2014, the U.S. Food and Drug Administration (FDA) responded to these threats with guidance on how medical device manufacturers could improve the safety of implantable medical devices. The FDA advised manufacturers that their failure to develop cybersecurity controls could lead to repercussions including “compromised device functionality, loss of data (medical or personal) availability or integrity, or exposure of other connected devices or networks to security threats. This in turn may have the potential to result in patient illness, injury, or death.”

Further, as manufacturers well know, when a device malfunctions and causes bodily injury, consumers typically allege product liability claims. Patients whose devices are hacked could raise claims for design defects and failure to warn of the risk of cyber-vulnerabilities. These potential victims likely never considered that their life-saving medical devices could be used as a weapon. For most people, the idea that someone would attack a medical device seems unfathomable.

So, what motivates attacks on implanted medical devices? According to Dr. William Maisel, “[m]otivation for such actions might include the acquisition of private information for financial gain or competitive advantage; damage to a device manufacturer’s reputation; sabotage by a disgruntled employee, dissatisfied customer or terrorist to inflict financial or personal injury; or simply the satisfaction of the attacker’s ego.” Medical data can be worth ten times as much as a credit card number. Added to that, the medical device market was a $25.2 billion industry in 2012 and is expected to be a $33.6 billion industry by 2018. That’s a vast market of potential victims.

© 2015 Schiff Hardin LLP

It’s (Not) Academic: Cybersecurity Is a Must for Universities and Academic Medical Centers

Cutting-edge research institutions need cutting-edge cybersecurity to protect their IP and critical personal and financial data.  Universities hold vast repositories of valuable information, including student healthcare information, patient information from academic medical centers, and financial and personal data from applicants, donors, students, faculty, and staff.  So it’s no surprise hackers have been targeting universities lately—in fact, at least eight American universities (including Harvard, UC Berkeley, University of Maryland, and Indiana University) have announced cyber intrusions over the past two years.

With the cost of a data breach averaging $3.8 million,[1] universities cannot afford to pretend cybercrime won’t happen to them.  For institutions with health records, the financial costs can be even greater (as high as $360 per record!), due to the high value of health records on the internet’s black market, the “Dark Web.”

But, the dollars may not mean as much as the bad PR—having your institution’s name in national headlines, risking research funding from governments or corporate partners, losing protected and sensitive IP, fielding calls from angry donors, students, and parents whose personal information has been compromised, and defending multiple civil suits—all because the institution failed to assess its cyber liability.  (See additional information on assessing cyber liability).

For major research institutions holding valuable IP, health records, and grants for sensitive research, having a cybersecurity prevention and remediation plan is more than just a good idea, it’s an absolute must. And these cybersecurity measures must extend beyond mere “compliance.” The Federal Government will continue to create cybersecurity regulations, but its regulations will never keep up with the risks. A university’s administration answers to the Federal Government, to its Board, to its donors, to the media, to its students and faculty, and to the general public. None of these constituencies will be calmed by minimal compliance with outdated regulations.

Instead, universities can address their cybersecurity risks with some initial measures to prevent intrusions and to minimize the damage if a hacker does get through:

  • Protections against Insider Threats: Attacks by insiders accounted for more than 50% of the cyberattacks in 2014. To help mitigate these threats, create an insider threat team and build a holistic approach to security—include staff from IT and technology, legal, physical security, and human resources. Emphasize training of employees, faculty, and administrators in basic cybersecurity awareness to instill habits that will better protect the institution.

  • Enhance Network Security Policies and Procedures: Implement security precautions to make a hack more difficult. For example: create enhanced protocols to prevent unauthorized access to devices and systems, including multi-factor authentication; provide broad and frequent updates to computers on-campus and for computers that regularly access campus networks; and prevent access to compromised sites by incorporating controls into your network.

  • Cyber Intrusion Testing: Work with a vendor to test the institution’s current cybersecurity vulnerabilities and get advice on how to reduce those vulnerabilities.

  • Corrective Action Plan: Develop a corrective action plan, one that includes disclosure and mitigation efforts. Importantly, if an institution holds government contracts or grants, follow the required disclosure protocols for cyber intrusion (note that agencies may differ in their disclosure and mitigation requirements).

  • Cyber Insurance: Institutions, particularly those with academic medical centers and/or sensitive research programs, should ensure their policies are large enough to cover a worst-case scenario.

While a comprehensive cybersecurity plan will require additional systematic and long-term efforts, taking these steps will at least keep an institution off of a hacker’s list of “low-hanging fruit.”

Copyright © 2015, Sheppard Mullin Richter & Hampton LLP.


[1] Ponemon Institute, Cost of Data Breach Study (2015).  Note this average does not include mega-breaches like those experienced by Home Depot, Target, or Sony Pictures.

 

Three Trending Topics in IoT: Privacy, Security, and Fog Computing

Cisco has estimated that there will be 50 billion Internet of Things (IoT) devices connected to the Internet by the year 2020. IoT has been a buzzword over the past couple of years, but the buzz surrounding IoT in 2015 has IoT enthusiasts particularly excited. This year, IoT has taken center stage at many conferences around the world, including the Consumer Electronics Show (CES 2015), SEMICON 2015, and CEATEC Japan, among others.

1. IoT will Redefine the Expectations of Privacy

Privacy is of utmost concern to consumers and enterprises alike. For consumers, the deployment of IoT devices in their homes and other places where they typically expect privacy will lead to significant privacy concerns. IoT devices in homes are capable of identifying people’s habits that are otherwise unknown to others. For instance, a washing machine can track how frequently someone does laundry, and what laundry settings they prefer. A shower head can track how often someone showers and what temperature settings they prefer. When consumers purchase these devices, they may not be aware that these IoT devices collect and/or monetize this data.

The world’s biggest Web companies, namely Google, Facebook, LinkedIn, and Yahoo, are currently involved in lawsuits in which the issues relate to consent and whether the Web companies have provided an explicit enough picture of what data is being collected and how the data is being used. To share some perspective on the severity of the legal issues relating to online data collection, more than 250 suits have been filed in the U.S. in the past couple of years against companies’ tracking of online activities, compared to just 10 in the year 2010. As IoT devices become more prevalent, legal issues relating to consent and disclosure of how data is being collected, used, shared or otherwise monetized will certainly arise.

2. Data and Device Security is Paramount to the Viability of an IoT Solution

At the enterprise level, data security is paramount. IoT devices can be sources of network security breaches, and as such, ensuring that IoT devices remain secure is key. When developing and deploying IoT solutions at the enterprise level, enterprises should conduct due diligence to prevent security breaches via the IoT deployment, and also ensure that even if an IoT device is compromised, access to more sensitive data within the network remains secure. Corporations retain confidential data about their customers and are responsible for having adequate safeguards in place to protect that data. Corporations may be liable for deploying IoT solutions that are easily compromised. As we have seen with the countless data breaches over the past couple of years, companies have a lot to lose, financially and otherwise.

3. Immediacy of Access to Data and Fog Computing

For many IoT solutions, timing is everything. Many IoT devices and environments are “latency sensitive,” such that actions need to be taken on the data being collected almost instantaneously. Relying on the “cloud” to process the collected data and generate actions will likely not be a solution for such IoT environments, in which the immediacy of access to data is important. “Fog computing” aims to bring the storage, processing and data intelligence closer to the IoT devices deployed in the physical world to reduce the latency that typically exists with traditional cloud-based solutions. Companies developing large-scale IoT solutions should investigate architectures where most of the processing is done at the edge of the network, closer to the physical IoT devices.
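
As a rough illustration of that edge-first pattern, the Python sketch below shows a hypothetical fog gateway (the EdgeGateway class, its alarm threshold, and its batch size are all invented for this example) that acts on latency-sensitive readings locally and forwards only periodic summaries to the cloud.

    import statistics
    import time
    from collections import deque

    # Hypothetical fog/edge gateway: decide locally on latency-sensitive readings,
    # and send only aggregated summaries upstream to the cloud.
    class EdgeGateway:
        def __init__(self, alarm_threshold, batch_size=50):
            self.alarm_threshold = alarm_threshold
            self.batch_size = batch_size
            self.buffer = deque(maxlen=batch_size)

        def on_reading(self, value):
            # Local, low-latency decision: no cloud round trip required.
            if value > self.alarm_threshold:
                self.actuate_locally(value)
            self.buffer.append(value)
            if len(self.buffer) == self.batch_size:
                self.upload_summary()

        def actuate_locally(self, value):
            print(f"[edge] threshold exceeded ({value:.2f}) - triggering local action")

        def upload_summary(self):
            # Only an aggregate leaves the edge; raw samples stay local.
            summary = {
                "timestamp": time.time(),
                "count": len(self.buffer),
                "mean": statistics.mean(self.buffer),
                "max": max(self.buffer),
            }
            print(f"[cloud] uploading summary: {summary}")
            self.buffer.clear()

    gateway = EdgeGateway(alarm_threshold=80.0)
    for reading in (42.0, 81.5, 60.2):
        gateway.on_reading(reading)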

The Internet of Things has brought about new challenges and opportunities for technology companies. Privacy, security and immediacy of access to data are three important trends companies must consider going forward.

© 2015 Foley & Lardner LLP

Cyber Liability: The Risks of Doing Business in a Digital World

Major security and data breaches have become more prevalent in the past decade. News headlines are dominated by stories of major corporations having networks hacked and subjecting employees’ and customers’ personal, financial and health information to cyber threats. Perhaps one of the following from 2014 will sound familiar:

  • January: Snapchat had the names and phone numbers of 4.5 million users compromised

  • February: Kickstarter had personal information from 5.6 million donors compromised

  • May: eBay’s database of 145 million customers was compromised

  • September: iCloud had celebrity photostreams hacked

  • November: Sony Pictures had the highest profile hack of the year involving email accounts, video games and movie releases

While the news headlines make it easy to think this is an issue for large, Fortune 500 companies, the risk is equally widespread, but much less publicized, for small businesses.

While the data breaches at small businesses do not garner the same attention as the data breaches occurring at Sony or iCloud, the impact to the organization and the liability the organization incurs are largely the same.

Although there are many studies available giving analytics on the types of data breaches that occur, those most common to small businesses can be described in three general categories: unintentional/miscellaneous errors, insider misuse and theft/loss.

Unintentional and miscellaneous errors include any mistake that compromises security, such as accidentally posting private data to a public site, sending information to the wrong recipients, or failing to dispose of documents or assets securely. For example, have any of your employees ever accidentally sent an order (with account information) to the wrong email address?

Insider misuse is not a situation where an accidental error occurs. Rather, an employee or someone with access to the information intentionally accesses the data to use it for an unlawful purpose. For example, a disgruntled clerk in the billing department accesses customer information to obtain name, date of birth and bank account information in order to fraudulently establish a credit card in that customer’s name. Consider another scenario where a third party vendor, a benefits provider, for example, handles employee information. Once transmitted, the employer loses control over information security for that data. Savvy business owners will make sure their contracts with vendors make the vendor responsible for any data breach that occurs during the engagement and that it will indemnify the business for any actions arising from such a breach.

Data breaches also result from physical theft or loss of laptops, tablets, smart phones, USB drives or even printed documents. Consider a scenario where the Human Resources director is heading to a conference and her laptop is stolen at the airport. The laptop is not encrypted or passcode-protected, and the thief can access all the employee files the director keeps on her computer.

In the past decade, laws have been aimed at narrowing the information that can initially be collected by businesses and with whom it can be shared, as well as mitigating the breach after it occurs.

Federal regulations like the Health Insurance Portability and Accountability Act (HIPAA) limit the collection and use of protected health information, and also impose requirements on entities suffering a data breach, including customer notification and damage mitigation provisions, such as mandatory credit monitoring and fraud protection for affected customers.

The Personal Information Protection Act requires government agencies, corporations, universities, retail stores or other entities that handle nonpublic personal information to notify each Illinois resident who may be affected by a breach of data security. 815 ILCS 530/1 et seq. Personal information is defined as an individual’s first name or first initial and last name in combination with any one or more of the following data elements, when either the name or the data elements are not encrypted or redacted:

  1. Social security number.

  2. Driver’s license number or State identification card number.

  3. Account number or credit card or debit card number, or an account number or credit card number in combination with any required security code, access code, or password that would permit access to an individual’s financial account.

The required notice to Illinois residents must include contact information for credit reporting agencies and the Federal Trade Commission, along with a statement that the individual can obtain information from those sources about fraud alerts and security freezes. 815 ILCS 530/10(a). If the data breached is data that the entity owns or licenses, the notice must be made without unreasonable delay. Id. If the data breached is data that the entity does not own or license, notice must be made immediately. 815 ILCS 530/10(b).

Failure to notify affected consumers is a violation of the Illinois Consumer Fraud and Deceptive Business Practices Act. 815 ILCS 530/20.

Technology is everywhere. Smart phones, tablets, laptops, the internet, online bill payments and the like have changed the way businesses operate. There is no denying that technology allows for efficient and effective commerce and communication. Unfortunately, the same technology that allows for faster and more efficient commerce and communication also subjects businesses to new forms of risk when it comes to data security.

There are risk management tools that all businesses should be aware of and using on a daily basis. Anti-virus software, passwords on all devices, frequent backup of data, and encryption of sensitive information transmitted electronically are just a few.

What if a business owner takes all the steps necessary to reduce the risk of a data breach and it still occurs? There is a way to reduce damages and to shorten the recovery and restoration timeframes.

Cyber Liability insurance can protect businesses, large and small, from data breaches that result from malicious hacking or other non-malicious digital risks. This specific line of insurance was designed to insure consumers of technology services or products for liability and property losses that may result when a business engages in various electronic activities, such as selling on the internet or collecting data within its internal electronic network.

Most notably, cyber and privacy policies cover a business’ liability for data breaches in which the customer’s personal information (such as social security or credit card numbers) is exposed or stolen by a hacker.

As you might imagine, the cost of a data breach can be enormous. Costs arising from a data breach can include: forensic investigation, legal advice, costs associated with the mandatory notification of third parties, credit monitoring, public relations, losses to third parties, and the fines and penalties resulting from identity theft.

While most businesses are familiar with their commercial insurance policies providing commercial general liability (CGL) coverage to protect the business from injury or property damage, most standard commercial line policies do not cover many of the cyber risks mentioned above. Furthermore, cyber and privacy insurance is often confused with technology errors and omissions (tech E&O) insurance. However, tech E&O coverage is intended to protect providers of technology products and services such as computer software and hardware manufacturers, website designers, and firms that store corporate data on an off-site basis. Cyber risks are more costly. The size and scope of the services a business provides will play a role in coverage needs and pricing, as will the number of customers, the presence on the internet, and the type of data collected and stored. Cyber Liability policies might include one or more of the following types of coverage:

  • Liability for security or privacy breaches (including the loss of confidential information by allowing or failing to prevent unauthorized access to computer systems).

  • The costs associated with a privacy breach, such as consumer notification, customer support and costs of providing credit monitoring services to affected customers.

  • Costs of data loss or destruction (such as restoring, updating or replacing business assets stored electronically).

  • Business interruption and extra expense related to a security or privacy breach.

  • Liability associated with libel, slander, copyright infringement, product disparagement or reputational damage to others when the allegations involve a business website, social media or print media.

  • Expenses related to cyber extortion or cyber terrorism.

  • Coverage for expenses related to regulatory compliance for billing errors, physician self-referral proceedings and Emergency Medical Treatment and Active Labor Act proceedings.

While cyber liability insurance may not be right for all businesses, those that actively use technology to operate should consider the risks they would be exposed to if a data breach occurred. In addition, there are many different cyber policy exclusions and endorsements. Not all policies are created equal.

DoD Issues Targeted Class Deviation Updating Recently Adopted Cybersecurity DFARS Clauses

Last week, on October 8th, DoD issued a class deviation replacing DFARS 252.204-7012 and 252.204-7008 with revised clauses that give covered contractors up to nine (9) months (from the date of contract award or modification incorporating the new clause(s)) to satisfy the requirement for “multifactor authentication for local and network access” found in Section 3.5.3 of National Institute of Standards and Technology (NIST) Special Publication 800-171, “Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations.”

We previously reported on the August 26th Department of Defense (DoD) interim rule that greatly expanded the obligations imposed on defense contractors for safeguarding “covered defense information” and for reporting cybersecurity incidents involving unclassified information systems that house such information. The interim rule, which went into effect immediately, requires non-cloud contractors to comply with several new requirements, including those in DFARS 252.204-7012, “Safeguarding Covered Defense Information and Cyber Incident Reporting,” and DFARS 252.204-7008, “Compliance with Safeguarding Covered Defense Information Controls.” While the class deviation is a welcome development for contractors that may struggle to implement the NIST SP 800-171 requirements for multifactor authentication, the deviation: (1) requires contractors to notify the government if they need more time to satisfy those requirements, and (2) does not alter any other aspect of the August 26th interim rule.

DFARS 252.204-7012 requires prime contractors and their subcontractors to employ “adequate security” measures to protect “covered defense information.” Specifically, contractors must adhere to the security requirements in the version of NIST SP 800-171 that is in effect “at the time the solicitation is issued or as authorized by the Contracting Officer,” or employ alternative security measures approved in writing by an authorized representative of the DoD Chief Information Officer. Special Publication 800-171 describes fourteen families of basic security requirements. As described in section 2.2 of 800-171, each of these fourteen families has “derived security requirements,” which provide added detail of the security controls required to protect government data. The basic requirements are based on FIPS Publication 200, which “provides the high level and fundamental security requirements” for government information systems. The derived requirements are taken from the security controls contained in NIST Publication 800-53, “Security and Privacy Controls for Federal Information Systems and Organizations.” Among those derived requirements is one for “multifactor authentication for local and network access.”

DoD contractors and subcontractors should be aware of what the class deviation does and does not change:

  1. Effective immediately, DoD contractors and subcontractors are required to comply with the clauses at DFARS 252.204-7012, Safeguarding Covered Defense Information and Cyber Incident Reporting (DEVIATION 2016-O0001) (OCT 2015) and DFARS 252.204-7008, Compliance with Safeguarding Covered Defense Information Controls (DEVIATION 2016-O0001) (OCT 2015), in lieu of the clauses that were issued as part of the August 26th interim rule.
  2. Under the new clauses, DoD contractors (and subcontractors, through the prime contractor) may notify the contracting officer that they need up to 9 months (from the date of award or the date of a modification incorporating the new clauses) to comply with the requirements for “multifactor authentication for local and network access” in Section 3.5.3 of NIST SP 800-171.
  3. The revised clauses apply to all DoD contracts and subcontracts, including those for the acquisition of commercial items.
  4. The class deviation only impacts non-cloud contractor information systems that are not operated on behalf of the government (e.g., contractor internal systems).
  5. DoD contractors and subcontractors that cannot meet the specific requirements of NIST 800-171, including the requirements of Section 3.5.3, may still seek authorization from DoD to use “[a]lternative but equally effective security measures.”
  6. With the exception of the targeted changes to DFARS 252.204-7012 and DFARS 252.204-7008 (i.e., affording contractors up to 9 months to comply with Section 3.5.3 of NIST 800-171, provided they notify the contracting officer), all other requirements introduced by the August 26th interim rule remain in effect.
  7. Non-cloud contractor information systems that are operated on behalf of the government remain “subject to the security requirements specified [in their contracts].”
  8. The class deviation does not impact DoD cloud computing contracts, which remain subject to DFARS 252.239-7010, Cloud Computing Services.

Ensuring Compliance With the Revised DFARS Clauses and NIST SP 800-171 Section 3.5.3

During the solicitation phase of a procurement subject to the revised DFARS clauses, DoD contractors and subcontractors should engage technical experts to determine whether they would need additional time to satisfy the NIST requirements for multifactor authentication. If a contractor determines that additional time is needed, and is later awarded a contract subject to the new requirements, then the contractor should immediately notify the contracting officer in writing and should ensure that all subsequent communications with the government are adequately documented.

Upon providing such notice, contractors will have up to nine months (from the date of contract award or modification incorporating the revised clauses) to comply with Section 3.5.3 of NIST SP 800-171, which requires contractors to: “Use multifactor authentication for local and network access to privileged accounts and for network access to non-privileged accounts.” See NIST SP 800-171, Section 3.5.3 (emphasis added). Section 3.5.3 is a derived requirement of the basic security requirement in section 3.5 for identification and authentication. Section 3.5.3 of NIST SP 800-171 notes that:

  • “Multifactor authentication” requires two or more different factors to achieve authentication. Factors include: (i) something you know (e.g., password/PIN); (ii) something you have (e.g., cryptographic device, token); or (iii) something you are (e.g., biometric). The requirement for multifactor authentication does not require the use of a federal Personal Identification Verification (PIV) card or Department of Defense Common Access Card (CAC)-like solutions. Rather, “[a] variety of multifactor solutions (including those with replay resistance) using tokens and biometrics are commercially available.” Such solutions may employ hard tokens (e.g., smartcards, key fobs, or dongles) or soft tokens to store user credentials. See id., n. 22. (A minimal soft-token sketch follows this list.)
  • “Local access” is any access to an information system by a user (or process acting on behalf of a user) communicating through a direct connection without the use of a network.

  • “Network access” is any access to an information system by a user (or a process acting on behalf of a user) communicating through a network (e.g., local area network, wide area network, Internet).
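
As one concrete, hedged illustration of the “something you have” factor described above, the Python sketch below implements an RFC 6238 time-based one-time password (TOTP) soft token. It is not an assertion of what DFARS or NIST SP 800-171 requires; a compliant deployment would pair such a token with a knowledge or biometric factor, and the enrollment secret shown is a made-up value.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    # Minimal RFC 6238 TOTP soft token - one illustration of the "something you have"
    # factor; by itself it does not satisfy NIST SP 800-171 Section 3.5.3, which
    # requires two or more different factors.

    def totp(secret_base32, timestep=30, digits=6, at=None):
        key = base64.b32decode(secret_base32, casefold=True)
        counter = int((at if at is not None else time.time()) // timestep)
        msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()  # HOTP uses HMAC-SHA-1
        offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
        code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
        return str(code).zfill(digits)

    def verify(secret_base32, submitted, skew_steps=1):
        """Accept codes from adjacent time steps to tolerate clock drift."""
        now = time.time()
        return any(
            hmac.compare_digest(totp(secret_base32, at=now + step * 30), submitted)
            for step in range(-skew_steps, skew_steps + 1)
        )

    # Hypothetical enrollment secret shared between the server and the user's token app.
    SECRET = "JBSWY3DPEHPK3PXP"
    print(totp(SECRET), verify(SECRET, totp(SECRET)))  # current code, True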

UK Government Launches Cybersecurity Service For Healthcare Organizations

The UK government has announced a new national service providing expert cybersecurity advice to entities within the National Health Service (NHS) and the UK’s broader healthcare system.  The project, called CareCERT (Care Computing Emergency Response Team), is aiming for a full go-live in January 2016.

According to recent press releases, CareCERT will:

  • “Provide incident response expertise for the management of cyber security incidents and threats across health and care system”;

  • “Broadcast potential cyber threats and necessary actions to take across the sector, to ensure cyber threats are safely dealt with”;

  • “Be a central source of security intelligence for health and care by working with cross government monitoring partners such as GovCertUK and CERT-UK”;

  • “Support the analysis of emerging and future threats through unique analysis tools and reporting”; and

  • “Be a trusted source of security best practice and guidance”.

CareCERT will be run by the Health and Social Care Information Centre (HSCIC).  The HSCIC is an important offshoot of the UK Department of Health, overseeing information assurance and patient privacy within the NHS as part of its broader role in setting health IT standards, assisting IT rollout throughout the NHS, and managing the release of healthcare statistics for the NHS.

CareCERT is expected to be a natural evolution of HSCIC’s existing function and expertise.  In particular, under the HSCIC/Department of Health’s data breach reporting policy (imposed on NHS bodies and their suppliers through contract), HSCIC is already one of the bodies notified and involved in the event of serious data breaches in the public healthcare sector.  The creation of CareCERT will enhance the HSCIC’s incident response capabilities, and will give NHS suppliers an increased opportunity to engage with HSCIC proactively (for guidance and threat alerts), rather than only after serious incidents take place.

Article by Mark Young & Philippe Bradley-Schmieg of Covington & Burling

© 2015 Covington & Burling LLP