Wave of the Future or a Step Too Far? Wisconsin Company Offers Employees Microchip Implants, Employment Issues Abound

When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole. We shall be able to communicate with one another instantly, irrespective of distance. . . . and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket.

–Nikola Tesla, 1926

While we may now take Tesla’s connected world for granted, one cannot help but wonder what readers thought of his predictions in 1926 when he made the above statements in a magazine interview. It remains to be seen whether a similar pattern of skepticism, realization, and acceptance will eventually emerge regarding news that a vending machine company is offering its employees the opportunity to have microchips embedded in their hands to allow more convenient access to facilities, computers, and financial accounts.

The Wisconsin-based employer is reportedly the first in the United States to offer microchips (at a cost to the employer of $300 each) to employees on a voluntary basis. The microchip, roughly the size of a grain of rice, would be inserted into an employee’s hand between the thumb and forefinger, and could be used instead of a key to access buildings, log onto computers or printers, and even pay for goods in the company’s break room. It is not unlike fingerprint or other biometric technology that is becoming more widely used. In this case, however, the pertinent information is stored on the embedded microchip.

The company noted that in the future, the chip may also be able to store medical information or be used for transactions outside of the company. The chip’s technology is not, however, currently able to use GPS to track employees’ whereabouts.

Employers considering whether to implement such emerging technology may want to carefully assess whether the convenience outweighs the risks. Among the legal issues are the following:

Personal Privacy

While the company making headlines has stated that it will not use the technology to track its employees’ whereabouts (and the technology cannot currently support GPS monitoring), embedded microchips like this could create an electronic trail of the employee’s whereabouts whenever the employee is scanned to access secured locations.

Depending on where access points are installed, an employer could gain useful information, such as how long an employee spent in the break room, whether the employee was in the same vicinity as another employee who was allegedly harassed, or who was nearby when material went missing. Further, having a record of frequent “check-ins” throughout the day as the employee accesses buildings, printers, computers, vehicles, etc. might aid in verifying time records for payroll purposes or compliance with delivery schedules and other customer expectations. This kind of tracking is already available to employers through access cards, login PINs, and other devices; the embedded chip would simply be another tool for that purpose, and one that makes it harder to trick the system with “buddy punches” and other surreptitious behavior. On the other hand, an employer could also theoretically confirm how long an employee spent in the restroom, at a union meeting, or complaining to human resources.

If embedded chips ever advance to the point of supporting GPS, a current body of case law regarding non-embedded GPS devices (like phones and devices installed on company vehicles) offers insights into potential legal risks. Companies use these technologies to track the whereabouts of employees, but that also gives companies information that could form the basis of a discrimination claim. For example, a company may learn that an employee is regularly at a medical clinic, which the employee might use to claim disability discrimination. Or, in Wisconsin where state law protects against discrimination based on the use or non-use of lawful products, the employer might learn that the employee spends a lot of off-duty time at the neighborhood bar, which could lead to a claim that the employee was discriminated against for using legal products while not on duty.

In addition, requiring GPS tracking of employees’ whereabouts is a mandatory subject of bargaining for unionized employees. Even for non-union employees, courts have found that employers go too far if they track employees’ whereabouts in places where employees would have a reasonable expectation of privacy (like their homes). Public employers face even greater risks in using GPS technology because courts have found that GPS technology may qualify as a search under the Fourth Amendment.

Data Privacy

Information from the chip (e.g., banking information and medical information) has value and could be the target of theft. Just as personal information could be hacked from other company databases and infrastructure, hacking may be a possibility with this new technology. Because the chip is provided by the employer, would the employer be liable for damages resulting from the misappropriation of stolen information? If an employer were negligent in implementing security protocols on the microchips, there could be litigation over the employer’s liability.

Workers’ Compensation

If an employee has a medical reaction from the implant or the procedure of implanting the chip (for example, developing an infection), there is a possibility that the medical reaction could give rise to a workers’ compensation claim because the chip was provided by the employer for work-related reasons.

Medical and Religious Accommodation

The employer in question here is not requiring employees to embed the chips, but requiring employees to do so would be difficult. Employees would likely have a right to opt out of the requirement based on medical or religious objections. It is not unlike requiring employees to get an annual flu vaccine. Some employees are medically unable and must be granted a medical accommodation under the Americans with Disabilities Act and applicable state laws (absent an undue hardship to the employer). Others may object on religious grounds and therefore qualify for accommodations on that basis.  At least one court has supported an employee’s right to decline, on religious grounds, far less invasive biometric access technology.

A Look Into the Future

While the microchips currently in use appear to serve limited purposes, it is not farfetched that the technology will continue to develop and allow new uses. Employees may be comfortable with the current use, but not with future uses. Clear communication with employees as to the capabilities and uses of the chip would be essential to minimizing legal risk.

More practically, the chip technology itself may become outdated, or employees may leave the company, and the employer would need to determine what to do with a chip already embedded in a former employee. This could create medical challenges in removing the chip, or controversies over who has rights to the chip itself or is obligated to pay for its removal.

While the company at issue here has not made implanting a microchip a condition of employment, social, economic, and practical influences could leave employees with little alternative. Just as the convenience of direct deposit has made paper payroll checks virtually obsolete, the convenience of chip technology may render physical keys, identification badges, and similar access control measures a thing of the past. Why risk losing or forgetting your identification badge when you can guarantee the necessary data is with you at all times? Financially, it seems likely that an employer could offer an incentive to employees who make use of the chip technology, much as auto insurance companies offer premium reductions to those who permit tracking of their driving habits. Many employers already offer shift premiums; are chip premiums on the horizon?

Ultimately, while this developing technology may certainly provide some added convenience and may not be all that significant a departure from our society’s current reliance on mobile devices, embedding a microchip into an employee’s body takes the invasiveness of the technology and the legal ramifications one step further and requires a thoughtful weighing of the risks versus the benefits.


This post was written by Keith E. Kopplin and Sarah J. Platt of Ogletree, Deakins, Nash, Smoak & Stewart, P.C.

Third-Party Aspects of Cybersecurity Protections: Beyond your reach but within your control

Data privacy and cybersecurity issues are ongoing concerns for companies in today’s world.  It is nothing new to hear.  By now, every company is aware of the existence of cybersecurity threats and the need to try to protect itself.  There are almost daily reports of data breaches and/or ransomware attacks.  Companies spend substantial resources to try to ensure the security of their confidential information, as well as the personal and confidential information of their customers, employees and business partners.  As part of those efforts, companies are faced with managing and understanding their various legal and regulatory obligations governing the protection, disclosure and/or sharing of data – depending on their specific industry and the type of data they handle – as well as meeting the expectations of their customers to avoid reputational harm.

Despite the many steps involved in developing wide-ranging cybersecurity protocols – such as establishing a security incident response plan, designating someone to be responsible for cybersecurity and data privacy, training and retraining employees, and requiring passwords to be changed regularly – it is not enough merely to manage risks internal to the company.  Companies are subject to third-party factors not within their immediate control, in particular vendors and employee BYOD (Bring Your Own Device).  If those cybersecurity challenges are not afforded sufficient oversight, they will expose a company to significant risks that will undo all of the company’s hard work trying to secure and defend its data from unauthorized disclosures or cyberattacks.  Although companies may afford some consideration to vendor management and BYOD policies, absent rigorous follow up, a company may too easily leave a gaping hole in its cybersecurity protections.

VENDORS

To accomplish business functions and objectives and to improve services, companies regularly rely on third-party service providers and vendors.  In doing so, vendors may gain access to, and control over, confidential or personal information in order to perform the contracted services.  That information may belong to the company, its employees, its clients and/or its business partners.

When information is placed into the hands of a vendor and/or onto its computer systems, stored in its facilities, or handled by its employees or business partners, the information is subject to unknown risks based on what could happen to it while with the third party.  The possibility of a security breach or unauthorized use of or access to the information still exists, but a company cannot be sure what the vendor will do to protect against or address those dangers if they arise.  A company cannot simply assume its vendors will maintain the necessary security protocols; it must be vigilant, exercising reasonable due diligence over its vendors and instituting appropriate protections.  To achieve this task, a company needs to consider the type of information involved, the level of protection required, the risks at issue and how those risks can be managed and mitigated.

Due Diligence

A company must perform due diligence over the vendor and the services to be provided and should consider, among other things, supplying a questionnaire to the vendor to answer a host of cybersecurity related questions including:

> What services will the vendor provide?  Gain an understanding of the services being provided by the vendor, including whether the vendor only gains access to, or actually takes possession of, any information.  There is an important difference between a vendor (i) having access to a company’s network to implement a third-party solution or provide a third-party service and (ii) taking possession of and/or storing information on its network or even the network of its own third-party vendors.

> Who will have access to the information?  A company should know who at the vendor will have access to the information.  Which employees?  Will the vendor need assistance from other third parties to provide the contracted-for services?  Does the vendor perform background checks of its employees?  Do protocols exist to prevent employees who are not authorized from having access to the information?

> What security controls does the vendor have in place?  A company should review the vendor’s controls and procedures to make sure they comply not only with applicable legal and regulatory requirements but also with the company’s own standards.  Does the vendor have the financial wherewithal to manage cybersecurity risks?  Does the vendor have cybersecurity insurance?  Does the vendor have a security incident response plan?  To what extent has the vendor trained with or used the plan?  Has the vendor suffered a cyberattack?  If so, it actually may be a good thing depending on how the vendor responded to the attack and what, if anything, it did to improve its security following the attack.  What training is in place for the vendor’s employees?  How is the vendor monitoring itself to ensure compliance with its own procedures?

The Contract

A company should seek to include strong contractual language to obligate the vendor to exercise its own cybersecurity management and to cooperate with the company to ensure protection of the company’s data.  There are multiple provisions to consider when engaging vendors and drafting or updating contracts to afford the company appropriate protections.  A one-size-fits-all approach for vendors will not work and clauses will need to be modified to take account of, among other things:

 > The sensitivity of the information at issue – Does the information include only strictly confidential information, such as trade secrets or news of a potential merger?  Does the information include personal information, such as names, signatures, addresses, email addresses, or telephone numbers?  Does the information include what is considered more highly sensitive personal information, such as SSNs, financial account information, credit card information, tax information, or medical data?

> The standard of care and obligations for the treatment of information – A company should want its vendors to meet the same standards the company demands of itself.  Vendors should be required to acknowledge that they will have access to or will take possession of information and that they will use reasonable care to perform their services, including the collection, access, use, storage, disposal, transmission and disclosure of information, as applicable.  This can, and often should, include: limiting access to only necessary employees; securing business facilities, data centers, paper files, servers and back-up systems; implementing database security protocols, including authentication and access controls; encrypting highly sensitive personal information; and providing privacy and security training to employees.  Contracts also should provide that vendors are responsible for any unauthorized receipt, transmission, storage, disposal, use, or disclosure of information, including the actions and/or omissions of their employees and/or relevant third parties whom the vendors retain.

> Expectations in the event of a security breach at the company – A company should include a provision requiring a vendor’s reasonable cooperation if the company experiences a breach.  A company should have a contact at each of its vendors, who is available 24/7 to help resolve a security breach.  Compliance with a company’s own obligations to deal with a breach (including notification or remediation) could be delayed if a vendor refuses to timely provide necessary information or copies of relevant documents.  A company also can negotiate to include an indemnification provision requiring a vendor to reimburse the company for reasonable costs incurred in responding to and mitigating damages caused by any security breach related to the work performed by the vendor.

> Expectations in the event of a security breach at the vendor – A company should demand reasonable notification if the vendor experiences a security breach and require the vendor to take reasonable steps and use best efforts to remediate the breach and to try to prevent future breaches.  A company should negotiate for a provision permitting the company to audit the vendor’s security procedures and perhaps even to physically inspect the vendor’s servers and data storage facilities if the data at issue is particularly sensitive.

Monitoring

Due diligence and contractual provisions are necessary steps in managing the cybersecurity risks that a vendor presents, but absent consistent and proactive monitoring of the vendor relationship, including periodic audits and updates to vendor contracts, all prior efforts to protect the company in this respect will be undermined.  Determining who within the company owns the relationship – HR? Procurement? Legal? – is critical to managing the vendor effectively.

> Schedule annual or semi-annual reviews of the vendor relationship –  A company not only should confirm that the vendor is following its cybersecurity protocols but also should inquire if any material changes to those protocols have been instituted that impact the manner in which the vendor handles the company’s data.  Depending on the level of sensitivity of the data being handled by the vendor, a company may consider retaining a third-party reviewer to evaluate the vendor.

> Update the vendor contract, as necessary – A company employee should be responsible for reviewing vendor contracts annually to determine if any changes are necessary in view of cybersecurity concerns.

BYOD

Ransomware – malicious software that a hacker deposits onto a company’s network to encrypt the company’s data and hold it hostage until a ransom is paid – certainly is a heightened concern for all companies.  It is the fastest growing malware targeting all industries, with more than 50% growth in recent years.  Every company is wary of ransomware and is trying to do as much as possible to protect itself from hackers.  The best practices against ransomware are to (i) periodically train and retrain your employees to be on the lookout for ransomware; (ii) constantly back up your data systems; and (iii) split up the locations where data is maintained to limit the damage in the event some servers fall victim to ransomware.  One thing that easily is overlooked, however, or is afforded more limited consideration, is a company’s BYOD policy and enforcement of that policy.

Permitting a company’s employees to use their own personal electronic devices to work remotely will lower overhead costs and improve efficiency but will bring a host of security and compliance concerns.  The cybersecurity and privacy protocols that the company established and vigorously pursues inside the company must also be followed by its employees when using their personal devices – home computers, tablets, smartphones – outside the company.  Employees likely are more interested, however, in the ease of access to work remotely than in ensuring that proper cybersecurity measures are followed with respect to their personal devices.  Are the employees using sophisticated passwords on their personal devices or any passwords at all?  Do the employees’ personal devices have automatic locks?  Are the employees using the most current software and installing security updates?

These concerns are real.  In May of 2017, the WannaCry ransomware attack infected more than 200,000 computers in over 100 countries, incapacitating companies and hospitals.  Hackers took advantage of companies’ failure to install a Microsoft Windows patch that Microsoft had issued weeks earlier.  Even worse, it was discovered that some infected computers were using outdated versions of Microsoft Windows for which the patch would not have worked regardless.  Companies cannot risk pouring significant resources into establishing a comprehensive security program only to suffer a ransomware attack or otherwise to have their efforts undercut by an employee working remotely who failed to follow appropriate security protocols on his/her personal devices.

The dangers to be wary of include, among others:

> Personal devices may not automatically lock or have a timeout function.

> Employees may not use sophisticated passwords to protect their personal devices.

> Employees may use unsecured Wi-Fi hotspots to access the company’s systems, subjecting the company to heightened risk.

> Employees may access the company’s systems using outdated software that is vulnerable to cyberattacks.

Combatting the Dangers

To address the added risks that accompany allowing BYOD, a company must develop, disseminate and institute a comprehensive BYOD policy.  That policy should identify the necessary security protocols that the employee must follow to use a personal device to work remotely, including, among other things, the items below (an illustrative compliance-check sketch follows the list):

 > Sophisticated passwords

> Automatic locks

> Encryption of data

> Installation of updated software and security apps

> Remote access from secure WiFi only

> Reporting procedures for lost/stolen devices
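
To make the checklist above concrete, here is a minimal sketch, in Python, of how a device profile might be screened against these kinds of requirements. The DeviceProfile fields and the thresholds are assumptions chosen for illustration; in practice this data would come from a mobile device management platform rather than a hand-built record.

```python
# Illustrative sketch only: a simplified check of a device profile against the
# kinds of BYOD requirements listed above. Field names and thresholds are
# hypothetical, not taken from any particular policy or MDM product.
from dataclasses import dataclass

@dataclass
class DeviceProfile:
    passcode_length: int
    auto_lock_minutes: int
    storage_encrypted: bool
    os_patch_age_days: int
    remote_access_over_secure_wifi_only: bool

def compliance_issues(device: DeviceProfile) -> list[str]:
    """Return human-readable policy violations (empty list if compliant)."""
    issues = []
    if device.passcode_length < 8:
        issues.append("passcode shorter than 8 characters")
    if device.auto_lock_minutes > 5:
        issues.append("auto-lock interval longer than 5 minutes")
    if not device.storage_encrypted:
        issues.append("device storage is not encrypted")
    if device.os_patch_age_days > 30:
        issues.append("operating system patches are more than 30 days old")
    if not device.remote_access_over_secure_wifi_only:
        issues.append("remote access is not restricted to secure Wi-Fi")
    return issues

if __name__ == "__main__":
    device = DeviceProfile(6, 10, True, 45, False)
    for issue in compliance_issues(device) or ["compliant"]:
        print(issue)
```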

A company also should use mobile device management (MDM) technology to permit it to remotely access employees’ personal devices to install any necessary software updates or to limit access to company systems.  Of course, the employee must be given notice that the company may use such technology and must be told of that technology’s capabilities.  Among other things, mobile device management technology can do the following (a hypothetical policy sketch follows the list):

> Create a virtual partition separating work data and personal data

> Limit an employee’s access to work data

> Allow a company to push security updates onto an employee’s personal device
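
As a purely hypothetical illustration of those capabilities, the sketch below represents an MDM work-profile policy as a simple configuration structure. The section and field names are assumptions chosen for illustration and do not correspond to any particular MDM vendor’s schema.

```python
# Hypothetical sketch of an MDM work-profile policy; the structure and field
# names are illustrative assumptions, not a real vendor's configuration format.
WORK_PROFILE_POLICY = {
    "container": {
        "separate_work_partition": True,       # keep work data apart from personal data
        "block_copy_to_personal_apps": True,   # limit where work data can flow
    },
    "access": {
        "allowed_work_apps": ["mail", "calendar", "files"],
        "require_vpn_for_company_systems": True,
    },
    "updates": {
        "push_security_updates": True,         # company can push patches to the work profile
        "minimum_os_version": "latest-supported",
    },
    "lost_or_stolen": {
        "remote_wipe_work_partition_only": True,  # wipe work data, leave personal data alone
    },
}

def summarize(policy: dict) -> None:
    """Print each policy section and its settings for review."""
    for section, settings in policy.items():
        print(section)
        for key, value in settings.items():
            print(f"  {key}: {value}")

if __name__ == "__main__":
    summarize(WORK_PROFILE_POLICY)
```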

Enforcement

Similar to vendor management, the cybersecurity benefits of having a robust BYOD policy in place, or even of using mobile device management technology, are significantly weakened unless a company enforces the policy it has instituted.

> A BYOD policy should be a prominent part of any employee cybersecurity training.

> The company should inform the employee of the company’s right to access/monitor/delete information from an employee’s personal device in the event of, among other things, litigation and e-discovery requests, internal investigations, or the employee’s termination.

CONCLUSION

Implementing the above recommendations will not guarantee that a company will never suffer a breach, but it will stem the threats created by third-party aspects of its cybersecurity program.  Even if a company ultimately suffers a breach, having had these protections in place to manage the risks associated with vendor management and BYOD will help safeguard the company from the scrutiny of regulators and from the criticism of its customers – which would be even worse.

This post was written by Joseph B. Shumofsky of Sills Cummis & Gross P.C.

Weapons in the Cyber Defense Arsenal

In May 2017, the world experienced an unprecedented global cyberattack that targeted the public and private sectors, including an auto factory in France, dozens of hospitals and health care facilities in the United Kingdom, gas stations in China and banks in Russia. This is just the tip of the iceberg and more attacks are certain to follow. As this experience shows, companies of all sizes, across all industries, in every country are vulnerable to cyberattacks that can have devastating consequences for their businesses and operations.

The Malware Families

Exploiting vulnerabilities in Microsoft® software, hackers launched a widespread ransomware attack targeting hundreds of thousands of companies worldwide. The vector, “WannaCry” malware, encrypts electronic files and locks them until the hacker releases them after a ransom is paid in difficult-to-trace Bitcoin. The malware also has the ability to spread to all other computer systems on a network. On the heels of WannaCry, a new attack called “Adylkuzz” is crippling computers by diverting their processing power.

The most prevalent types of ransomware found in 2016 were Cerber and Locky. Microsoft detected Cerber, used in spam campaigns, in more than 600,000 computers and observed that it was one of the most profitable ransomware families of 2016. Spread via malicious spam emails that carry an executable virus file, Cerber has gained increasing popularity due to its Ransomware-as-a-Service (RaaS) business model, which enables less sophisticated hackers to lease the malware.

Check Point Software indicated that Locky was the second most prevalent piece of malware worldwide in November 2016.  Microsoft detected Locky in more than 500,000 computers in 2016. First discovered in February 2016, Locky is typically delivered via an email attachment (including Microsoft Office documents and compressed attachments) in phishing campaigns designed to entice unsuspecting individuals to click on the attachment. Of course, as the most recent global attacks demonstrate, hackers are devising and deploying new variants of ransomware with different capabilities all the time.

The Rise of Ransomware Attacks

The rise in ransomware attacks is directly related to the ease with which it is deployed and the quick return for the attackers. The U.S. Department of Justice has reported that there was an average of more than 4,000 ransomware attacks daily in 2016, a 300 percent increase over the prior year. Some experts believe that ransomware may be one of the most profitable cybercrime tactics in history, earning approximately $1 billion in 2016. Worse yet, even with the ransom paid, some data already may have been compromised or may never be recovered.

The risk is even greater if your ransom-encrypted data contains protected health information (PHI). In July 2016, the U.S. Department of Health and Human Services, Office for Civil Rights (HHS/OCR) advised that the encryption or permanent loss of PHI would trigger HIPAA’s Breach Notification Rule for the affected population, unless a low probability that the PHI had been compromised could be demonstrated. This means a mandated investigation to establish that the PHI was unlikely to have been accessed or otherwise compromised.

Ransomware Statistics

According to security products and solutions provider Symantec Corporation, ransomware was the most dangerous cybercrime threat facing consumers and businesses in 2016:

  • The majority of 2016 ransomware infections occurred on consumer computers, at 69 percent, with enterprises accounting for 31 percent.

  • The average ransom demanded in 2016 rose to $1,077, up from $294 in 2015.

  • There was a 36 percent increase in ransomware infections from 340,665 in 2015 to 463,841 in 2016.

  • The number of ransomware “families” found totaled 101 in 2016, triple the 30 found in 2015.

  • The biggest event of 2016 was the beginning of RaaS, or the development of malware packages that can be sold to attackers in return for a percentage of the profits.

  • Since January 1, 2016, more than 4,000 ransomware attacks have occurred daily – a 300 percent increase over the 1,000 daily attacks seen in 2015.

  • In the second half of 2016, the percentage of recognized ransomware attacks from all malware attacks globally doubled from 5.5 percent to 10.5 percent.

The Best Defense Is a Good Offense

While no perfectly secure computer system exists, companies can take precautionary measures to increase their preparedness and reduce their exposure to potentially crippling cyberattacks. Although Microsoft no longer supports the Windows XP operating system, which was hit hardest by WannaCry, Microsoft has made an emergency patch available to protect against the malware. Even so, those still using Windows XP should upgrade all devices to a more current operating system that is still fully supported by Microsoft to ensure protection against emerging threats. Currently, that means upgrading to Windows 7, Windows 8 or Windows 10.

Even current, supported software needs to be updated when prompted by the computer. Those who delay installing updates may find themselves at risk. Microsoft issued a patch for supported operating systems in March 2017 to protect against the vulnerability that WannaCry exploited. Needless to say, many companies did not bother to patch their systems in a timely manner.
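
As one small illustration of checking whether a particular update has been applied, the sketch below asks PowerShell’s Get-HotFix cmdlet for installed hotfixes and looks for a given KB identifier. The KB number shown is a placeholder rather than a specific WannaCry-related patch, and real environments would rely on centralized patch-management tooling rather than an ad hoc script.

```python
# Illustrative sketch only (Windows): checks whether a given update (KB number)
# appears among installed hotfixes reported by PowerShell's Get-HotFix cmdlet.
# The KB number used below is a placeholder for demonstration purposes.
import subprocess

def is_hotfix_installed(kb_id: str) -> bool:
    """Return True if the KB identifier appears in the Get-HotFix output."""
    result = subprocess.run(
        ["powershell", "-Command", "Get-HotFix | Select-Object -ExpandProperty HotFixID"],
        capture_output=True,
        text=True,
        check=False,
    )
    installed = {line.strip() for line in result.stdout.splitlines() if line.strip()}
    return kb_id in installed

if __name__ == "__main__":
    kb = "KB0000000"  # placeholder KB identifier
    print(f"{kb} installed: {is_hotfix_installed(kb)}")
```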

Ransomware creates even greater business disruption when a company does not have secure backups of files that are critical to key business functions and operations. It also is important for companies to back up files frequently, because a stale backup that is several months old or older may not be particularly useful. Companies also should make certain that their antivirus and anti-malware software is current to protect against emerging threats.
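
To illustrate the “stale backup” point, here is a minimal sketch that flags a backup set whose newest file is older than a chosen threshold. The backup directory and the 24-hour threshold are assumptions for illustration; real backup verification should also confirm that backups actually restore, not merely that they are recent.

```python
# Minimal sketch: warn if the newest file in a backup directory is older than a
# freshness threshold. The path and threshold below are illustrative assumptions.
import time
from pathlib import Path

BACKUP_DIR = Path("/backups/nightly")   # hypothetical backup location
MAX_AGE_HOURS = 24                      # hypothetical freshness requirement

def newest_backup_age_hours(backup_dir: Path) -> float:
    """Return the age, in hours, of the most recently modified file in backup_dir."""
    files = [p for p in backup_dir.iterdir() if p.is_file()]
    if not files:
        raise FileNotFoundError(f"No backup files found in {backup_dir}")
    newest_mtime = max(p.stat().st_mtime for p in files)
    return (time.time() - newest_mtime) / 3600

if __name__ == "__main__":
    age = newest_backup_age_hours(BACKUP_DIR)
    status = "OK" if age <= MAX_AGE_HOURS else "STALE - investigate immediately"
    print(f"Newest backup is {age:.1f} hours old: {status}")
```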

In addition, companies need to train their employees on detecting and mitigating potential cyber threats. Employees are frequently a company’s first line of defense against many forms of routine cyberattacks that originate from seemingly innocuous emails, attachments and links from unknown sources. Indeed, many cyberattacks can be avoided if employees are simply trained not to click on suspicious links or attachments that could surreptitiously install malware.

Last but not least, companies should consider purchasing cyber liability insurance coverage, which is readily available. While cyber policies are still evolving and there are no standardized policy forms, coverage can be purchased at varying price points with different levels of coverage. Some of the more comprehensive forms of coverage provide additional “bells and whistles” such as immediate access to preapproved professionals that can guide companies through the legal and technical web of cybersecurity events and incident response.

Other cyber policies afford bundled coverages that may include:

  • The costs of a forensics investigation to identify the source and scope of an incident

  • Notification to affected individuals

  • Remediation in the form of credit monitoring and identity theft restoration services

  • Costs to restore lost, stolen or corrupted data and computer equipment

  • Defense of third-party claims and regulatory investigations arising out of a cyberattack.

 

This post was written by Anjali C. Das, Kevin M. Scott and John Busch of Wilson Elser Moskowitz Edelman & Dicker LLP.

Health Care Task Force Pre-Releases Report on Cybersecurity Days Before Ransomware Attack

Last week, the Health Care Industry Cybersecurity (HCIC) Task Force (the “Task Force”) published a pre-release copy of its report on improving cybersecurity in the health care industry.  The Task Force was established by Congress under the Cybersecurity Act of 2015.  The Task Force is charged with addressing challenges in the health care industry “when securing and protecting itself against cybersecurity incidents, whether intentional or unintentional.”

The Task Force released its report mere days before the first worldwide ransomware attack, commonly referred to as “WannaCry,” which occurred on May 12.  The malware is thought to have infected more than 300,000 computers in 150 jurisdictions to date.  In the aftermath of the attack, the U.S. Department of Health and Human Services (HHS) sent a series of emails to the health care sector, including a statement that government officials had “received anecdotal notices of medical device ransomware infection.”  HHS warned that the health care sector should particularly focus on devices that connect to the Internet, run on Windows XP, or have not been recently patched.  As in-house counsel understand, the ransomware attack raises a host of legal issues.

Fittingly, the HCIC report calls cybersecurity a “key public health concern that needs immediate and aggressive attention.”  The Task Force identifies six high-level imperatives and, for each imperative, offers several recommendations.

The imperatives are as follows:

  1. Define and streamline leadership, governance, and expectations for health care industry cybersecurity.

  2. Increase the security and resilience of medical devices and health IT.

  3. Develop the health care workforce capacity necessary to prioritize and ensure cybersecurity awareness and technical capabilities.

  4. Increase health care industry readiness through improved cybersecurity awareness and education.

  5. Identify mechanisms to protect research and development efforts and intellectual property from attacks or exposure.

  6. Improve information sharing of industry threats, weaknesses, and mitigations.

With respect to medical devices (imperative #2), the Task Force specifically advocates for greater transparency regarding third-party software components.  The report encourages manufacturers and developers to create a “bill of materials” that describes the product’s components, as well as known risks to those components, to enable health care delivery organizations to move quickly to determine if their medical devices are vulnerable.  Furthermore, the Task Force writes that product vendors should be transparent about their ability to provide IT support during the lifecycle of a medical device product.  The Task Force also recommends that health care organizations ensure that their systems, policies, and processes account for the implementation of available updates and IT support for medical devices, such as providing patches for discovered vulnerabilities.  The report suggests that government and industry “develop incentive recommendations to phase-out legacy and insecure health care technologies.”
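
To make the “bill of materials” idea concrete, the sketch below shows one minimal way such a record might be represented so that a health care delivery organization could quickly check whether any component matches a newly disclosed vulnerability. The structure, component names, and vulnerability identifiers are illustrative assumptions, not a format prescribed by the Task Force report.

```python
# Illustrative sketch of a component "bill of materials" for a medical device.
# Component names, versions, and vulnerability IDs are made up for demonstration;
# the Task Force report does not prescribe this format.
DEVICE_BOM = {
    "device": "ExampleInfusionPump",   # hypothetical device name
    "components": [
        {"name": "embedded-os", "version": "4.1", "known_issues": ["VULN-0001"]},
        {"name": "tls-library", "version": "1.0.2", "known_issues": []},
        {"name": "web-admin-ui", "version": "2.3", "known_issues": ["VULN-0042"]},
    ],
}

def affected_components(bom: dict, advisory_id: str) -> list[str]:
    """Return names of components whose known issues include the given advisory."""
    return [
        component["name"]
        for component in bom["components"]
        if advisory_id in component["known_issues"]
    ]

if __name__ == "__main__":
    # Quick triage: which components does advisory VULN-0042 touch?
    print(affected_components(DEVICE_BOM, "VULN-0042"))  # -> ['web-admin-ui']
```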

The Task Force also encourages medical device manufacturers to implement “security by design,” including by making greater security risk management a priority throughout the product lifecycle, such as through adding greater testing or certification. In addition, the report encourages both developers and users to take actions that improve security access to information stored on devices, such as through multi-factor authentication.  The Task Force recommends that government agencies, such as the U.S. Food and Drug Administration (FDA) and the Office of the National Coordinator for Health Information Technology (ONC) at HHS, consider using existing authorities to “catalyze and reinforce activities and action items” associated with this recommendation.  This includes leveraging existing government guidance and industry standards, like FDA’s premarket and postmarket cybersecurity guidance documents.  Published in 2014 and 2016, these documents recommend that “manufacturers should monitor, identify, and address cybersecurity vulnerabilities and exploits as part of the [secure development lifecycle].”  We have previously discussed these guidance documents here and here.

Finally, the Task Force recommends that the health care industry take a “long-range approach” to considering “viability, effectiveness, security, and maintainability of” medical devices. The Task Force states that each product should have a defined strategy and design that supports cybersecurity during each stage of the product’s lifecycle.  In particular, the Task Force encourages HHS to evaluate existing authorities to conduct cybersecurity surveillance of medical devices.

This post was written by Dena Feldman and Christopher Hanson of Covington & Burling LLP.

Yesterday, #WannaCry. Today, #DocuSignPhish

Another day, another data incident.  If you use DocuSign, you’ll want to pay attention.

The provider of e-signature technology has acknowledged a data breach incident in which an unauthorized third party gained access to the email addresses of DocuSign users.   Those email addresses have now been used to launch a massive spam campaign.   By using the stolen email address database and sending “official” looking emails, cyber criminals are hoping that recipients will be more likely to click on and open the malicious links and attachments.

DocuSign’s alert to users says in part:

[A]s part of our ongoing investigation, today we confirmed that a malicious third party had gained temporary access to a separate, non-core system that allows us to communicate service-related announcements to users via email. A complete forensic analysis has confirmed that only email addresses were accessed; no names, physical addresses, passwords, social security numbers, credit card data or other information was accessed. No content or any customer documents sent through DocuSign’s eSignature system was accessed; and DocuSign’s core eSignature service, envelopes and customer documents and data remain secure.


Two phishing campaigns already detected and more likely

The DocuSign Trust Center has posted alerts notifying users of two large phishing campaigns launched on May 9 and again on May 15.

The company is now advising customers NOT TO OPEN emails with the following subject lines, used in the two spam campaigns.

  • Completed: [domain name]  – Wire transfer for recipient-name Document Ready for Signature

  • Completed [domain name/email address] – Accounting Invoice [Number] Document Ready for Signature

We recommend that you change your DocuSign password in light of this incident as an extra measure of caution.  Also, DocuSign (and other similar services) offer two-factor authentication, and we strongly recommend that you take advantage of this extra security measure.

As always, think before you click.

“WannaCry” Ransomware Attack Causes Disruption Globally – With Worst Yet to Come

A ransomware strain known as “WannaCry” affected 200,000 people in 150 countries over the weekend, locking computer files and demanding payment to release them. As of this morning, Australia and New Zealand users seem to have avoided the brunt of the attack, with the Federal Government confirming only three reports of Australian companies being affected.  Not that ransomware attacks tend to be widely reported – a high proportion of affected users simply pay, because the ransom is deliberately priced cheaper than most alternatives unless your back-up process is very good.

The ransomware utilises vulnerabilities in out-of-date, unpatched versions of Microsoft Windows to infect devices. It spreads from computer to computer as it finds exposed targets, without the user having to open an e-mail attachment or click a link, as is commonplace in most attacks. Ransom demands start at US$300 and double after three days.

The U.K. National Health Service (NHS) was among the worst hit organisations, forcing hospitals to cancel appointments and delay operations because they could not access their patients’ medical records. The Telegraph suggested that 90 percent of NHS trusts were using Windows XP, a 16-year-old operating system that was particularly vulnerable to the attack. More attacks are anticipated throughout the working week as companies and organisations turn on their devices.

The U.K. National Cyber Security Center has released guidance to help both home users and organisations limit the impact of the attacks. It can be read here.

Edwin Tan is co-author of this article. 

Company Awarded Damages After Former Employee Hacks Its Systems and Hijacks Its Website

A company can recover damages from its former employee in connection with his hacking into its payroll system to inflate his pay, accessing its proprietary files without authorization and hijacking its website, a federal court ruled. Tyan, Inc. v. Yovan Garcia, Case No. CV 15-05443-MWF (JPRx) (C.D. Cal. May 2, 2017).

The Defendant worked as a patrol officer for a security company. The company noticed that its payroll system indicated that the Defendant was working substantial overtime hours that were inconsistent with his scheduled hours. Upon further investigation, the company learned that the Defendant accessed the payroll system without authorization from the laptop in his patrol car. When the company confronted him, the Defendant claimed a competitor hacked the payroll system as a means to pay him to keep quiet about his discovery that the competitor had taken confidential information from the company. A few months later, shortly after the Defendant left the company, the company’s computer system was hacked and its website was hijacked. The company later filed suit against the Defendant alleging he was responsible for the hack and the hijacking.

Following a bench trial, the court concluded the Defendant had used an administrative password the company had not given him to inflate his hours in its payroll system. The court also found the Defendant hijacked the company’s website and posted an unflattering image of the company’s owner on the website. In addition, the court found the Defendant engaged in a conspiracy to steal confidential files from the company’s computer system by accessing it remotely without authorization and destroyed some of the company’s computer files and servers.

The court concluded that the aim of the conspiracy in which the Defendant was engaged was twofold: first, to damage his former employer in an effort to reduce its competitive advantage; and second, to obtain access to those files that gave his former employer its business advantage, and use them to solicit its clients on behalf of a company he started. The court also found that by accessing the company’s protected network to artificially inflate his hours and by participating in the conspiracy to hack the company’s systems, the Defendant was liable for violations of the Computer Fraud and Abuse Act, the Stored Communications Act, the California Computer Data Access and Fraud Act, and the California Uniform Trade Secrets Act.

As a result of Defendant’s misconduct, the court awarded the company $318,661.70 in actual damages, including damages for the inflated wages the company paid the Defendant, the cost of consultant services to repair the damage from the hack, increased payroll costs for time spent by employees rebuilding records and databases destroyed in the hack, the resale value of the company’s proprietary files, and lost profits caused by the hack. The court declined to award punitive damages under the California Uniform Trade Secrets Act, but left open the possibility that the Plaintiff may recover its attorneys’ fees at a later date.

Take Away

Companies are reminded that malicious insiders – in particular, disgruntled former employees with access to areas of the system that external hackers generally cannot easily reach – are often responsible for the most costly data breaches.

Steps should be taken to mitigate insider threats, including:

  • Limiting remote access to company systems
  • Increased monitoring of company systems following a negative workplace event such as the departure of a disgruntled employee
  • Changing passwords and deactivating accounts during the termination process

Trump’s First Hundred Days and Cybersecurity

Executive Order Delay Trumps Administration Policy Development

President Trump’s first hundred days did not produce the event that most people in the cybersecurity community expected – a Presidential Executive Order supplanting or supplementing the Obama administration’s cyber policy – but that doesn’t mean that this period has been uneventful, particularly for those in the health care space.

The events of the period have cautioned us not to look for an imminent Executive Order. While White House cybersecurity coordinator Robert Joyce recently stated that a forthcoming executive order will reflect the Trump administration’s focus on improving the security of federal networks, protecting critical infrastructure, and establishing a global cyber strategy based on international law and deterrence, other policy demands have intruded. Indeed as the 100-day mark approached, President Trump announced that he has charged his son-in-law, Jared Kushner, with developing a strategy for “innovation” and modernizing the government’s information technology networks. This is further complicating an already arduous process for drafting the long-awaited executive order on cybersecurity, sources and administration officials say.

The Importance of NIST Has Been Manifested Throughout the Hundred Days

The expected cyber order likely will direct federal agencies to assess risks to the government and critical infrastructure by using the framework of cybersecurity standards issued by the National Institute of Standards and Technology, a component of the Department of Commerce.

The NIST framework, which was developed with heavy industry input and released in 2014, was intended as a voluntary process for organizations to manage cybersecurity risks. It is not unlikely that regulatory agencies, including the Office for Civil Rights of the Department of Health and Human Services, the enforcement agency for HIPAA, will mandate the NIST framework, either overtly or by implication, as a compliance hallmark and possible defense against sanctions.

NIST has posted online the extensive public comments on its proposed update to the federal framework of cybersecurity standards, which includes new provisions on metrics and supply chain risk management. The comments are part of an ongoing effort to further revise the cybersecurity framework. NIST will host a public workshop on May 16-17, 2017.

Health Industry Groups Are Urging NIST to Set up a ‘Common’ Framework for Cybersecurity Compliance

Various health care industry organizations including the College of Healthcare Information Management Executives and the Association for Executives in Healthcare Information Security have asked NIST to help the industry develop a “common” approach for determining compliance with numerous requirements for protecting patient data. Looking for a common security standard for compliance purposes, commenters also argue that the multiplicity of requirements for handling patient data is driving up healthcare costs. Thus, the groups urge NIST to work with the Department of Health and Human Services and the Food and Drug Administration “to push for a consistent standard” on cybersecurity. One expects this effort, given strong voice in the First Hundred Days, to succeed.

The Federal Trade Commission is Emerging as the Pre-eminent Enforcement Agency for Data Security and Privacy

With administration approval, the Federal Communications Commission is about to release a regulatory proposal to reverse Obama-era rules for the internet, a proposal intended to re-establish the Federal Trade Commission as the pre-eminent regulatory agency for consumer data security and privacy. By repealing the Obama administration’s “net neutrality” order, ending common carrier treatment for ISPs and the concomitant consumer privacy and security rules adopted by the FCC, the result, according to FCC Chairman Pai, would be to “restore FTC to police privacy practices” on the internet in the same way that it did prior to 2015. Federal Trade Commission authority, especially with regard to health care, is not without question, especially considering that the FTC’s enforcement action against LabMD is still pending decision in the Eleventh Circuit. However, the FTC has settled an increasing number of the largest data breach cases. The Federal Trade Commission’s acting bureau chief for consumer protection, Thomas Pahl, this week warned telecom companies against trying to take advantage of any perceived regulatory gap if Congress rolls back the Federal Communications Commission’s recently approved privacy and security rules for internet providers.

OCR Isn’t Abandoning the Field; Neither is DoJ

While there have been no signal actions in either agency during the first hundred days, the career leadership of both has indicated its intention not to make any major changes in enforcement policy.  OCR is considering expanding its policies with respect to overseeing compliance programs and extending that oversight to the conduct of boards of directors.

The Supreme Court Reaches Nine

Many would argue that the most important, or at least most durable, accomplishment of the Trump Administration to date is the nomination and confirmation of Neil Gorsuch to the Supreme Court. Justice Gorsuch is a conservative in the Scalia mold and is expected to cast a critical eye on agency regulatory actions. There is no cybersecurity matter currently on the Supreme Court’s docket, but there will be as the actions and regulations of agencies like the FTC, FCC and DHHS are challenged.

©2017 Epstein Becker & Green, P.C. All rights reserved.

Cybersecurity: Yes, They Will Hack Your Car

Auto manufacturers are increasingly equipping vehicles with rapidly advancing technologies, raising concerns regarding how the public will be affected by these changes. Manufacturers are beginning to implement automated driving and vehicle-to-vehicle (V2V) communication capabilities into their cars, extending potential cybersecurity threats and associated safety issues to road users.

As consumers, we already see cybersecurity threats and breaches in many areas of our day-to-day lives. With the rise of automated and connected cars across the auto industry, these same threats and breaches have a strong potential to sprout in our lives on the road as well.

NHTSA has outlined the factors it will consider in evaluating cybersecurity threats as potential safety-related defects. They are as follows:

  • The amount of time elapsed since the vulnerability was discovered (e.g., less than one day, three months, or more than six months)

  • The level of expertise needed to exploit the vulnerability (e.g., whether a layman can exploit the vulnerability or whether it takes an expert to do so)

  • The accessibility of knowledge of the underlying system (e.g., whether how the system works is public knowledge or whether it is sensitive and restricted)

  • The necessary window of opportunity to exploit the vulnerability (e.g., an unlimited window or a very narrow window)

  • The level of equipment needed to exploit the vulnerability (e.g., standard or highly specialized)

Additionally, NHTSA’s guidance suggests practices that manufacturers should consider adopting, including:

  • Participating in the Automotive Information Sharing and Analysis Center (Auto-ISAC), which became fully operational in January 2016

  • Developing policies around reporting and disclosure of vulnerabilities to external cybersecurity researchers

  • Instituting a documented process for responding to incidents, vulnerabilities, and exploits and running exercises to test the effectiveness of these processes

  • Developing a documentation process that will allow self-auditing, which may include risk assessments, penetration test results, and organizational decisions

  • For original equipment, developing processes to ensure vulnerabilities and incidents are shared with appropriate entities throughout the supply chain

As vehicle technologies continue to progress, we expect that NHTSA’s guidance will evolve to address future concerns.

To continue reading through NHTSA’s enforcement plans on motor vehicle safety as it pertains to recent technological advances, be sure to check out Thursday’s post on automated vehicle regulations.

© 2017 Foley & Lardner LLP

The Department Of Homeland Security Proposes New Rules Affecting Federal Government Contractors

This week, the Department of Homeland Security (“DHS”) issued three proposed rules expanding data security and privacy requirements for contractors and subcontractors. The proposed rules build upon other recent efforts by various federal agencies to strengthen safeguarding requirements for sensitive government information.  Given the increasing emphasis on data security and privacy, contractors and subcontractors are well advised to familiarize themselves with these new requirements and undertake a careful review of their current data security and privacy procedures to ensure they comply.

  • Privacy Training

DHS contracts currently require contractor and subcontractor employees to complete privacy training before accessing a Government system of records; handling Personally Identifiable Information and/or Sensitive Personally Identifiable Information; or designing, developing, maintaining, or operating a Government system of records. DHS proposes to include this training requirement in the Homeland Security Acquisition Regulation (“HSAR”) and to make the training more easily accessible by hosting it on a public website.  By including the rule in the HSAR, DHS would standardize the obligation across all DHS contracts.  The new rule would require the training to be completed within thirty days of the award of a contract and on an annual basis thereafter.

DHS invites comment on the proposed rule. In particular, DHS asks commenters to offer their views on the burden, if any, associated with the requirement to complete DHS-developed privacy training.  DHS also asks whether the industry should be given the flexibility to develop its own privacy training.  Comments must be submitted on or before March 20, 2017.

  • Information Technology Security Awareness Training

DHS currently requires contractor and subcontractor employees to complete information technology security awareness training before accessing DHS information systems and information resources. DHS proposes to amend the HSAR to require IT security awareness training for all contractor and subcontractor employees who will access (1) DHS information systems and information resources or (2) contractor owned and/or operated information systems and information resources capable of collecting, processing, storing or transmitting controlled unclassified information (“CUI”) (defined below).  DHS will require employees to undergo training and to sign DHS’s Rules of Behavior (“RoB”) before they are granted access to those systems and resources.  DHS also proposes to make this training and the RoB more easily accessible by hosting them on a public website.  Thereafter, annual training will be required.  In addition, contractors will be required to submit training certification and signed copies of the RoB to the contracting officer and maintain copies in their own records.

Through this proposed rule, DHS intends to require contractors to identify employees who will require access, to ensure that those employees complete training before they are granted access and annually thereafter, and to provide to the government, and maintain, evidence that the training has been conducted. Comments on the proposed rule are due on or before March 20, 2017.

  • Safeguarding of Controlled Unclassified Information

DHS’s third proposed rule will implement new security and privacy measures, including handling and incident reporting requirements, in order to better safeguard CUI. According to DHS, “[r]ecent high-profile breaches of Federal information further demonstrate the need to ensure that information security protections are clearly, effectively, and consistently addressed in contracts.”  Accordingly, the proposed rule – which addresses specific safeguarding requirements outlined in an Office of Management and Budget document outlining policy on managing government data – is intended to “strengthen[] and expand[]” upon existing HSAR language.

DHS’s proposed rule broadly defines “CUI” as “any information the Government creates or possesses, or an entity creates or possesses for or on behalf of the Government (other than classified information) that a law, regulation, or Government-wide policy requires or permits an agency to handle using safeguarding or dissemination controls[,]” including any “such information which, if lost, misused, disclosed, or, without authorization is accessed, or modified, could adversely affect the national or homeland security interest, the conduct of Federal programs, or the privacy of individuals.” The new safeguarding requirements, which apply to both contractors and subcontractors, include mandatory contract clauses; collection, processing, storage, and transmittal guidelines (which incorporate by reference any existing DHS policies and procedures); incident reporting timelines; and inspection provisions. Comments on the proposed rule are due on or before March 20, 2017.

  • Other Recent Efforts To Safeguard Contract Information

DHS’s new rules follow a number of other recent efforts by the federal government to better control CUI and other sensitive government information.

Last fall, for example, the National Archives and Record Administration (“NARA”) issued a final rule standardizing marking and handling requirements for CUI. The final rule, which went into effect on November 14, 2016, clarifies and standardizes the treatment of CUI across the federal government.

NARA’s final rule defines “CUI” as an intermediate level of protected information between classified information and uncontrolled information.  As defined, it includes such broad categories of information as proprietary information, export-controlled information, and certain information relating to legal proceedings.  The final rule also makes an important distinction between two types of systems that process, store or transmit CUI:  (1) information systems “used or operated by an agency or by a contractor of an agency or other organization on behalf of an agency”; and (2) other systems that are not operated on behalf of an agency but that otherwise store, transmit, or process CUI.

Although the final rule directly applies only to federal agencies, it directs agencies to include CUI protection requirements in all federal agreements (including contracts, grants and licenses) that may involve such information.  As a result, its requirements indirectly extend to government contractors.  At the same time, however, it is likely that some government contractor systems will fall into the second category of systems and will not have to abide by the final rule’s restrictions.  A pending FAR case and anticipated forthcoming FAR regulation will further implement this directive for federal contractors.

Similarly, last year the Department of Defense (“DOD”), General Services Administration, and the National Aeronautics and Space Administration issued a new subpart and contract clause (52.204-21) to the FAR “for the basic safeguarding of contractor information systems that process, store, or transmit Federal contract information.”  The provision adds a number of new information security controls with which contractors must comply.

DOD’s final rule imposes a set of fifteen “basic” security controls for covered “contractor information systems” upon which “Federal contract information” transits or resides.  The new controls include:

  1. Limiting access to the information to authorized users

  2. Limiting information system access to the types of transactions and functions that authorized users are permitted to execute

  3. Verifying controls on connections to external information systems

  4. Imposing controls on information that is posted or processed on publicly accessible information systems

  5. Identifying information system users and processes acting on behalf of users or devices

  6. Authenticating or verifying the identities of users, processes, and devices before allowing access to an information system

  7. Sanitizing or destroying information system media containing Federal contract information before disposal, release, or reuse

  8. Limiting physical access to information systems, equipment, and operating environments to authorized individuals

  9. Escorting visitors and monitoring visitor activity, maintaining audit logs of physical access, and controlling and managing physical access devices

  10. Monitoring, controlling, and protecting organizational communications at external boundaries and key internal boundaries of information systems

  11. Implementing subnetworks for publicly accessible system components that are physically or logically separated from internal networks

  12. Identifying, reporting, and correcting information and information system flaws in a timely manner

  13. Providing protection from malicious code at appropriate locations within organizational information systems

  14. Updating malicious code protection mechanisms when new releases are available

  15. Performing periodic scans of the information system and real-time scans of files from external sources as files are downloaded, opened, or executed.

“Federal contract information” is broadly defined to include any information provided by or generated for the federal government under a government contract.  It does not, however, include either:  (1) information provided by the Government to the public, such as on a website; or (2) simple transactional information, such as that needed to process payments.  A “covered contractor information system” is defined as one that is:  (1) owned or operated by a contractor; and (2) “possesses, stores, or transmits” Federal contract information.

ARTICLE BY Connie N. Bertram, Amy Blackwood & Emilie Adams of Proskauer Rose LLP