“I always feel like somebody’s watching me…” The Legalities of Smart Devices and Privacy

“Hey Alexa…”

It’s a simple phrase that makes us feel like we’re living in the future promised us by The Jetsons and Star Trek. Alexa, Siri, Google Assistant—all Artificial Intelligence (AI) designed to make our lives just a little easier. Need a recipe for beef brisket? Just ask Siri. What time is the movie going to start? Ask Alexa. Need some music for your dinner party? Google Assistant has you covered, just ask. But how are Alexa and Siri at your beck and call? The answer is they’re always listening. What does that mean for you? It means that every sound they hear is analyzed and indexed.

Data Privacy

Privacy is the next big frontier in eDiscovery. Data privacy laws are constantly evolving. The General Data Protection Regulation (GDPR) (effective May 25, 2018) is the European Union (EU) and European Economic Area (EEA) law that relates to data protection and privacy. It also applies to the transfer of personal data outside of the EU and EEA. (The University of Michigan has a great timeline of the history of privacy law.) Practically speaking, your personal data is one of the most valuable assets you have.

Understanding existing and pending privacy legislation is important. Currently, three states, including California, have passed legislation; nine states, including Pennsylvania, have active bills; and 15 states have introduced legislation that ultimately died or was postponed. At some point there could be federal legislation that governs privacy similar to the GDPR.

Do these devices violate wiretapping laws? Unclear.

An issue worth exploring is whether these devices fall under the purview of wiretapping laws. In Hall-O’Neil v. Amazon, a class action case in the Western District of Washington, Plaintiffs allege that Alexa-enabled devices collected and recorded confidential conversations with minors. Hall-O’Neil v. Amazon.com Inc. et al., 2:19CV00910. It is important to keep an eye on these and other similar cases to understand the privacy issues at play with these types of devices.

How Do We Handle Evolving Privacy Issues in the Legal World?

So, what does this mean for legal professionals? One thing to consider is attorney-client privilege. With the global pandemic requiring a major shift to working from home, you should carefully consider the ramifications of having a virtual assistant in your home while you’re working on client matters—you may be violating attorney-client privilege. Out of an abundance of caution, you probably want to unplug your virtual assistant before getting to work.

On the flip side, if someone has a virtual assistant and it was present during a key meeting or event, you might want to investigate subpoenaing the recordings, which raises additional issues: who owns the data related to virtual assistants, how long is the data retained, and how do you obtain it? Law enforcement agencies have been subpoenaing virtual assistant data for years to obtain voice clips and time-stamped logs of user activity in criminal investigations.

What’s the Best Practice?

With so many questions and so few real legal precedents it’s best to proceed with caution with the use of these devices. It’s also very important, from an eDiscovery perspective, to make sure you’re aware of the potential for important data to be found on these devices during the discovery process.

Article by Gretchen E. Moore, Lydia A. Gorba, Lynne Hewitt and Maryann Mahoney of Strassburger, McKenna Gutnick & Gefsky

For more articles on cybersecurity, please visit here.

©2021 Strassburger McKenna Gutnick & Gefsky
National Law Review, Volume XI, Number 244

Privacy Tip #253 – Unemployment Fraud Claims Are Skyrocketing—What to Do if You Are a Victim?

I have received many questions this week on what to do if you are the victim of a fraudulent unemployment claim. It is unbelievable how many people I know who have become victims—yes—including myself.

It is disturbing that all of these fraudulent unemployment claims include the use of our full Social Security numbers. The other disturbing fact is that even if we have a fraud alert or security freeze on our credit accounts, those measures don’t necessarily alert us when a fraudulent unemployment claim is filed in our name.

The Federal Trade Commission (FTC) has recognized this rampant problem and issued tips this week to provide assistance to consumers who have been victimized by these fraudulent unemployment claim scams. The tips can be accessed here.


Copyright © 2020 Robinson & Cole LLP. All rights reserved.
For more articles on privacy, visit the National Law Review Communications, Media & Internet section.

Mama Always Said, ‘Tell the Truth,’ Especially When It Comes to COVID-19

Since the outbreak of the COVID-19 pandemic earlier this year, employers have been placed in the position of having to deal with numerous conflicting legal and moral obligations.  Prior to the pandemic, by virtue of the Americans with Disabilities Act and similar state and local laws, employers were greatly limited in the questions they could ask prospective and current employees about their individual health conditions.  Similarly, unless they were seeking a workplace accommodation, employees did not have to disclose their personal health conditions to their employer.

In the battle to quell the pandemic, the rules have changed significantly.  Employers have greater leeway to ask questions related to the pandemic, and employees who may have medical conditions previously unknown to the employer are disclosing them because of their concerns about increased susceptibility to becoming infected by the virus.  At the same time, getting quick and reliable information about an employee’s COVID-19 status may be difficult.  Frequently, an employee will only receive an initial verbal confirmation of a positive test and have to wait days for the written report.  Complicating matters are reports in the media of employees who have falsely told their employer they tested positive.  In some of the reported cases, upon hearing of a positive test, the employer shut down its entire operation for a deep cleaning only to later have the employee retract their statement that they had tested positive.  In some of these falsification incidents, employees are now facing criminal prosecution.  What is an employer to do?

Trust but Verify

The vast majority of employees are honest and deeply concerned about their employer’s response to COVID-19. Therefore, if an employee reports they have tested positive, the employer should not wait for written verification and should immediately begin to follow Centers for Disease Control and Prevention (CDC) or local health authority protocols.  At the same time, employers should take all possible steps to verify the accuracy of what the employee is reporting.

In cases of suspected fraud, here are some steps an employer can and should take:

  1. Require the employee to provide written confirmation.  As noted above, employers should understand that a written confirmation of a positive COVID-19 test may not be immediately available to the employee.  Many test sites provide only a verbal response with the written verification following days later.  Employers should still require written confirmation of the verbal positive result.
  2. While waiting for written confirmation of test results, ask the employee specifically where and when they went for testing and verify the accuracy of that information.  In one case reported in the media, a suspicious HR manager determined that the hospital where the employee claimed to have been tested was not even performing COVID-19 tests.
  3. Carefully examine any written documentation provided by the employee.  Doctor’s notes and other non-detailed information can be checked with a Google search to determine that the practitioner is real.  A phone call to that practitioner should easily confirm the accuracy of the documentation.
  4. Communicate to employees in advance that falsification of employee records and information, especially something as critical as a positive COVID-19 test, can be grounds for discipline, including termination of employment.

© 2020 Foley & Lardner LLP

For more on employers’ COVID-19 considerations, see the National Law Review Coronavirus News section.

Temperature Checks: Three Things to Know Before Screening Employees and Customers

As businesses begin the calculated process of re-opening their doors to employees and customers, many are considering implementing temperature checks to monitor for at least one known COVID-19 symptom – fever.

Beyond nailing down the logistics of temperature checks (e.g., who will perform them, has that person been trained, do employees need to be paid while waiting in line, how will social distancing be maintained, etc.), there are several significant legal considerations that should be evaluated before implementation.

The Illinois Biometric Information Privacy Act

Some temperature screening devices utilize facial-recognition technology to quickly identify those with fever so that they can be promptly tracked down and removed from the facility. While these systems provide logistical advantages, especially to large employers and retailers, they likely implicate provisions of the Illinois Biometric Information Privacy Act (BIPA), which can lead to costly litigation and result in stiff penalties for anyone who violates the statute, even unwittingly.

Under BIPA, businesses utilizing this type of facial-recognition technology must obtain advance, written consent from the individuals to be scanned, and must also maintain a publicly available policy that specifies information regarding the collection, use, storage, and destruction of individuals’ biometric information. And, again, these policies and consents must be executed and implemented before temperature screenings begin. It is, therefore, critical to determine whether your temperature screening devices perform facial recognition scans or capture other biometric information.

Confidentiality of Employee Information

Employers screening employee temperatures must also remember they are conducting a “medical examination,” as defined by the Equal Employment Opportunity Commission (EEOC) and would be wise to adhere to the EEOC’s guidance on the issue. This means information collected about employees’ temperature, such as the temperature readings themselves, or the fact that an employee had or has a fever, must be treated as confidential medical information and maintained in a confidential file separate from an employee’s personnel file. Employers should also take care to not divulge the identity of any employee sent home with fever, absent consent from the employee to share that information with other personnel, or a strict need-to-know among involved supervisor(s) or members of human resources.

The California Consumer Privacy Act

California’s sweeping new privacy law, the California Consumer Privacy Act (CCPA), contains broad protection of consumers’ “personal information,” and requires businesses subject to the statute to, among other things, notify consumers when their personal information is being collected. Though body temperature is not explicitly mentioned in the statute, the definition of “personal information” is broad, and includes information that “identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer …” It includes biometric information. Whether an individual’s temperature constitutes personal information is up for some debate, but debates often lead to costly litigation, and it is easy enough to amend CCPA notices to include temperature until that debate is resolved in an effort to avoid litigation altogether.

So, if a business is subject to the CCPA and intends to collect employee or customer temperatures (whether or not with the use of biometric technology), it should consider updating its CCPA notices to include “temperature” (and, if applicable, scans of face geometry) to the list of personal information collected.


© 2020 Much Shelist, P.C.

For more employer COVID-19 guidance, see the National Law Review Coronavirus News section.

Good News for Companies: Seventh Circuit Holds Removal of Plaintiffs’ Biometrics Privacy Claims to Federal Court OK

In a widely watched case, the Seventh Circuit decided last week that companies that collect individuals’ biometric data may be able to defend their cases in federal court when plaintiffs allege a procedural violation of Illinois’ Biometric Information Privacy Act (BIPA).

In Bryant v. Compass Group USA, Inc., the Seventh Circuit held that certain procedural violations of Illinois’ BIPA constituted actual injuries and therefore satisfied the requirements for federal court standing. Relying on Spokeo, the seminal U.S. Supreme Court case addressing what constitutes an actual injury for standing purposes, the court held that the plaintiff’s allegations, if proven, would demonstrate that she suffered an actual injury based on the fact that Compass did not obtain her consent before obtaining her private information. Therefore, the case could remain in federal court.

The decision now gives defendants that want to defend BIPA claims in federal court a roadmap for their arguments, including access to a larger jury pool, the Federal Rules of Civil Procedure, and other federal court-related advantages. It is also notable because BIPA defendants have attempted to remove BIPA cases to federal court and then file motions to dismiss them for lack of standing. However, the federal courts have typically remanded these cases, forcing defendants back into state court and sometimes even requiring them to pay just costs and any actual expenses, including attorney fees, incurred as a result of the removal.[1]

What Happened in Bryant v. Compass Group USA

In Compass Group USA, a customer sued a vending machine manufacturer after she scanned her fingerprint into a vending machine to set up an account during her employer’s orientation. She then used her fingerprint to buy items from the vending machine.

The plaintiff filed a putative class action lawsuit on behalf of herself and all other persons similarly situated in state court alleging that Compass violated her statutory rights under BIPA by 1) obtaining her fingerprint without her written consent and 2) not establishing a publicly available data retention schedule or destruction guidelines for possession of biometric data as required by the statute.

Shortly after the plaintiff filed suit in Cook County Circuit Court, Compass filed a notice of removal to the Northern District of Illinois. Seeking remand, the plaintiff argued that she did not have federal standing for her BIPA claims because she had not alleged an injury-in-fact as required by Article III.

Compass argued that the plaintiff had alleged an injury-in-fact under Article III, pointing to the recent Illinois Supreme Court case, Rosenbach v. Six Flags Ent. Corp., which held that plaintiffs can bring BIPA claims based on procedural violations, even if they have suffered no actual injury. Rosenbach held that, if a company, for example, fails to comply with BIPA’s requirement of establishing destruction guidelines for possession of biometric data, that violation alone – without any actual pecuniary or other injury – creates an actual injury.

The district court sided with the plaintiff and concluded that Rosenbach merely established “the policy of the Illinois courts” to allow plaintiffs to bring BIPA claims without alleging an actual injury. Rosenbach did not interpret procedural BIPA violations to be actual injuries.

Because the plaintiff’s claims did not establish Article III standing, the district court granted the plaintiff’s motion to remand the case back to state court.

The Seventh Circuit reversed, relying on Spokeo. It interpreted Spokeo as holding that injuries may still be particularized and concrete – i.e., actual – even if they are intangible or hard to prove. The court also cited Justice Thomas’ concurrence in Spokeo that distinguished between private rights (which courts have historically presumed to cause actual injuries) and public rights (which require a further showing of injury).

The court held that the plaintiff had alleged that she suffered an actual injury when Compass collected her biometric data without obtaining her informed consent because this was a private right. The court also relied on Fed. Election Comm’n v. Akins, 524 U.S. 11 (1998).  In Akins, the Supreme Court held that nondisclosure can be an actual injury if plaintiffs can show an impairment of their ability to use information in a way intended by the statute. The court in Compass similarly held that the defendant had denied the plaintiff the opportunity — and statutory right — to consider whether the terms of the defendant’s data collection and usage were acceptable. As a result, the court held that the plaintiff alleged an actual injury.

By contrast, the court determined that the plaintiff’s other claim – that the defendant violated BIPA by failing to make publicly available a data retention schedule and destruction guidelines for possession of biometric data – implicated a public right and did not cause the plaintiff an actual injury.


[1] See, e.g. Mocek v. Allsaints USA Ltd., 220 F. Supp. 3d 910, 914 (N.D. Ill. 2016) (“Defendant’s professed strategy of removing the case on the basis of federal jurisdiction, only to turn around and seek dismissal with prejudice—a remedy not supported by any of defendant’s cases—on the ground that federal jurisdiction was lacking, unnecessarily prolonged the proceedings. . . . For the foregoing reasons, I grant plaintiff’s motion for remand and attorneys’ fees and deny as moot defendant’s motion to dismiss. Because defendant has not objected to the specific fee amount plaintiff claims, which she supports with evidence in the form of affidavits and billing records, I find that plaintiff is entitled to payment in the amount of $58,112.50 pursuant to § 1447(c).”)

© 2020 Schiff Hardin LLP
For more on BIPA, see the National Law Review Communications, Internet, and Media Law section.

FCC Subjects Robocallers and Caller Identification Fraudsters to Increased Penalties and Broader Enforcement

On May 1, 2020, the Federal Communications Commission (FCC) adopted rules to strengthen protections against robocalls and the manipulation of caller identification information to misrepresent the true identity of the caller (known as caller ID spoofing).1 The FCC’s amended rules, which implement portions of the recently-enacted Pallone-Thune Telephone Robocall Abuse Criminal Enforcement and Deterrence Act (TRACED Act), streamline the procedure for commencing enforcement actions against violators and expand the statute of limitations applicable to FCC proceedings against robocallers and caller ID spoofers2 (see GT Alert, TRACED Act Subjects Robocallers to Increased Penalties, Outlines Regulatory and Reporting Requirements to Deter Violations).

The FCC’s changes to its rules include the following:

  • Eliminating the requirement that the FCC issue a citation to a person or entity that violates prohibitions against robocalling before issuing a notice of apparent liability if the person or entity does not hold a license, permit, or other authorization issued by the FCC. As noted by FCC Chairman Ajit Pai in the news release accompanying the FCC’s Order: “Robocall scam operators don’t need a warning these days to know what they are doing is illegal, and this FCC has long disliked the statutory requirement to grant them mulligans.” Caller ID spoofers are already subject to FCC enforcement actions without receiving a citation as a warning.3
  • Increasing the penalty amount to up to $10,000 for each intentional unlawful robocall in addition to the monetary forfeiture permitted under 47 U.S.C. § 503 (for persons or entities that are not FCC licensees or common carriers, the forfeiture penalty shall not exceed $20,489 for each violation and $153,669 for any continuing violation).4 Importantly, each unlawful robocall is considered to be a separate violation, so the potential forfeiture amounts could be very high.
  • Extending the statute of limitations applicable to FCC enforcement actions for intentional robocall violations and for caller ID spoofing violations to four years. Under the amended rule, the FCC may not impose a forfeiture penalty against a person for violations that occurred more than four years prior to the date a notice of apparent liability is issued. The statute of limitations had been one year for all robocall violations and two years for caller ID spoofing violations. This change will significantly increase the timeframe of conduct subject to FCC enforcement and that can be included in a proposed forfeiture amount.
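Because each unlawful call counts as a separate violation, potential exposure scales linearly with the number of calls placed. A back-of-the-envelope sketch of that arithmetic (the campaign size is an invented example; the dollar figures are the caps cited above):

```python
# Hypothetical illustration of how per-call penalties compound.
# Dollar figures are the caps described above: up to $10,000 per
# intentional unlawful robocall under the TRACED Act, plus a base
# forfeiture of up to $20,489 per violation for non-licensees.
PER_CALL_ADDITIONAL = 10_000   # TRACED Act addition per intentional call
PER_CALL_BASE = 20_489         # 47 U.S.C. § 503 cap per violation (non-licensee)

calls_in_campaign = 5_000      # invented robocall campaign size

max_exposure = calls_in_campaign * (PER_CALL_ADDITIONAL + PER_CALL_BASE)
print(f"Maximum potential exposure: ${max_exposure:,}")  # $152,445,000
```

Even a modest campaign, in other words, can generate nine-figure theoretical exposure once both penalty components are stacked per call.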

Conclusion

The FCC’s amended rules, consistent with the TRACED Act, are intended to discourage unlawful robocalling and caller ID spoofing by abolishing the “one free pass” formerly applicable to entities that do not hold FCC authorizations, increasing the penalties for intentional violations, and expanding the statute of limitations period. This is the FCC’s most recent action to implement the TRACED Act by strengthening protections against unlawful robocalls and caller ID spoofing. Other steps recently taken by the FCC include initiating a rulemaking proceeding to prevent one-ring scams (when a caller initiates a call and allows the call to ring for a short duration with the aim of prompting the called party to return the call and be subject to charges). Given the FCC’s significant focus on combatting illegal robocalling, it is important that companies that rely on robocalls to contact consumers understand the federal laws governing such calls and implement procedures to ensure that they comply with those laws and regulations.


1 The Telephone Consumer Protection Act (TCPA) (which was amended by the TRACED Act) and the FCC’s implementing regulations generally prohibit the use of autodialed, prerecorded or artificial voice calls (commonly known as robocalls) to wireless telephone numbers and the use of prerecorded or artificial voice calls to residential telephone numbers unless the caller has received the prior express consent of the called party (certain calls, such as telemarketing calls, require prior express written consent) or is subject to specified exemptions. See 47 U.S.C. § 227; 47 C.F.R. § 64.1200.

2 The FCC issued these rules pursuant to an order, rather than utilizing notice and comment procedures, because the content of the rules did not require the exercise of administrative discretion. The rules will become effective 30 days after the date of publication in the Federal Register.

3 The FCC may issue a forfeiture order if it finds that the recipient of a notice of apparent liability has not adequately responded to the FCC’s allegations. The FCC may also seek to resolve the matter through a consent order which generally requires the alleged violator to make a voluntary payment, develop a compliance plan, and file compliance reports.

4 See 47 U.S.C. § 503(b)(2)(D) as adjusted for inflation. The FCC has authority to make upward or downward adjustments to forfeiture amounts based on several factors. See 47 C.F.R. § 1.80.

©2020 Greenberg Traurig, LLP. All rights reserved.

The Return of Balance and Proportionality

Oscar Wilde was known for saying “Everything in moderation, including moderation.” For a period of time, we were only confronted with the scary aspects of “Big Data.” Think The Great Hack and the testy congressional hearings that we watched.

But the viral pandemic has thrown privacy absolutism into deeper question, as we are suddenly faced with a problem that in order to be solved must involve finding and tracking people for extended periods of time. We need to decide how to balance the societal need for virus control with the societal good of personal privacy.

Contact tracing is often used as an epidemic control measure. It has been deployed against illnesses such as measles, SARS, typhoid, meningococcal disease, and Ebola, and it is currently being implemented in South Korea and China to combat COVID-19. Lawmakers have discussed using the tool in the U.S. as Apple and Google work together to develop an effective contact tracing system.

The Israeli government approved tracking the cell phone data of people suspected of having coronavirus, to make sure they self-isolated. This emergency power lasted for 30 days. Israel’s Supreme Court, concerned with the privacy implications of using a military technology to track its own citizens’ daily movements, ruled that the government must halt the surveillance unless it passes legislation extending its use. Then an oversight group in Israel’s parliament blocked an attempt to extend the emergency measures beyond this week, also due to privacy concerns. A committee member said the harm done to privacy outweighed the benefits.

As I recently wrote, this crisis may be testing sensibilities about privacy. Perhaps I was wrong. Sentiments do not seem to be moving aggressively towards greater data collection, or a sacrifice of consumer rights. Instead there appears to be a return towards measuring the weight of data against the potential for abuse, or grand commodification of personal information. In Israel more than 200 people, some identified through phone location information, had been arrested for violating quarantine. Thirty days of these extreme measures were tolerable. Then the Israelis had second thoughts.

Ulrich Kelber, Germany’s federal data protection commissioner, who recently claimed that the lack of GDPR enforcement was a result of enforcement agencies not receiving enough resources, backed a plan for Germany’s disease prevention agency to use Deutsche Telekom metadata. Considering that just a week earlier he had deemed tracking individual smartphones to monitor quarantine a “totally inappropriate and encroaching measure,” it is apparent that Germany is balancing the harsh reality of the crisis and the immediate need for certain information against this encroachment.

Canada’s Privacy Commissioner released a “Framework for the Government of Canada to Assess Privacy-Impactful Initiatives in Response to COVID-19.” The Commissioner’s Office acknowledged that COVID-19 raised “exceptionally difficult challenges to both privacy and public health.” However, the framework reiterated that “the principles of necessity and proportionality, whether in applying existing measures or in deciding on new actions to address the current crisis,” will govern. Canada too is weighing the need of the information collected against the nature and sensitivity of the information collected.

The European Data Protection Board (EDPB) provided multiple guidance documents regarding COVID-19. Much like its Canadian counterpart, the EDPB’s guidance provides that the “general principles of effectiveness, necessity, and proportionality must guide any measures adopted by Member States or EU institutions that involve processing of personal data to fight COVID-19.” These guidelines clarify the conditions and principles for the proportionate use of location data and contact tracing tools. But the EDPB also stressed that the “data protection legal framework was designed to be flexible and as such, is able to achieve both an efficient response in limiting the pandemic and protecting fundamental human rights and freedoms.”

Here in the United States, all eyes have been on the California Attorney General regarding enforcement of the California Consumer Privacy Act, which is set to begin on July 1, 2020. Unlike our neighbors to the north and in Europe, there has been no significant acknowledgment of the need for balance or proportionality – just a reminder that as “the health emergency leads more people to look online to work, shop, connect with family and friends, and be entertained, it is more important than ever for consumers to know their rights under the California Consumer Privacy Act.”

For many sovereigns, this crisis has led enforcement agencies and legislatures to return to the roots of data privacy: balance and proportionality. Many privacy laws require a balancing test for entities collecting data. COVID-19 has made these principles re-emerge into the limelight.


Copyright © 2020 Womble Bond Dickinson (US) LLP All Rights Reserved.

Make Remote Access for Your Employees Safer & Quicker with Disciplined User Rights

During times of disruption and an unpredictable future, your organization’s focus on “the basics” of a fundamental remote access strategy and design is essential. The newly widespread remote work environment dictated by various states’ stay-at-home orders during the coronavirus pandemic demands that successful organizations of tomorrow fully grasp the fundamentals of safe remote access protocols and prepare for the elastic growth of a disciplined remote access initiative.

The landscape of remote access is forever changed. Regardless of your organization’s existing hardware, software, network (WAN), or cloud design, basic planning activities – which pave the runway for successful remote access – ensure your organization’s sustainability and enhance your competitiveness in a crowded marketplace.

First and foremost, it’s recommended you audit your current infrastructure design – including a review of your hardware, software, infrastructure, bandwidth, security, etc. Any high-performing organization’s remote access strategy should maintain SLAs (Service Level Agreements) and project deadlines and objectives with all internal users, and exercise resiliency when confronted with the performance, compliance, and security demands needed to scale.

Three core strategic planning activities are highly recommended prior to, or in parallel with, an audit of your remote access posture:

Clean Up Your Users

Identity hygiene is a constant necessity for any organization to ensure its security stance and guarantee fluidity in the face of dynamic change. Legacy user account cleanup falls into this category, but the lesser-practiced aspects of identity hygiene include organizational unit restructuring and security group management. These components of a well-tuned identity management infrastructure represent the organizational layout of a business and the mapping of processes to business roles, which too often grow organically as companies mature. Complacency about organic growth has led many organizations to make drastic and costly decisions to start over rather than reorganize, in order to remove the cancer that has developed in their identity management infrastructure.

Segment User Roles

Likewise, segmenting roles is critical to identity hygiene. Most enterprises have adopted the bifurcation of administrator and personal accounts to ensure audit trails, but considerably fewer have aligned security stance to personnel role. As tenure grows and roles change to meet the needs of the organization, new rights and responsibilities are created and added to those individuals, with few taken away as the firm’s requirements change. Aligning roles to responsibilities, and more importantly permissions, assures audit compliance without complex explanations and eases transitions should those trusted employees ultimately leave the company.

Assign Least Access Rights to Segmented Roles

Finally, the selection of rights assigned to those segmented roles solidifies a corporate identity management strategy. Whether assigned through a workflow engine or maintained through formalized manual processes, assuring least access aligned to each role eliminates the organic growth of unnecessary permissions or access to no-longer-appropriate applications. This last part – removing access that is no longer strictly necessary – is a key facet of a comprehensive strategy that many organizations, including large enterprises, develop complacency around. It is too easy to allow excuses that support and even justify this laxity, but it is this very laxity around least access that opens doors to ransomware propagation, disgruntled and disaffected IT administrators, and glaring audit infractions.
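The least-access principle described above can be sketched in a few lines of code. This is a simplified illustration, not any particular identity management product; all role and permission names are invented:

```python
# Minimal sketch of least-access role mapping: each role carries only the
# permissions it strictly needs, and every access check goes through the
# role, never the individual. Role/permission names are invented examples.
ROLE_PERMISSIONS = {
    "hr_generalist": {"read_personnel_file", "update_personnel_file"},
    "it_admin": {"reset_password", "provision_account"},
    "auditor": {"read_personnel_file"},  # read-only by design
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant only what the role explicitly lists; unknown roles get nothing."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A job change is a role swap, not an accumulation of rights:
assert is_allowed("auditor", "read_personnel_file")
assert not is_allowed("auditor", "update_personnel_file")
```

The design choice worth noting is the default deny: a role not in the mapping receives an empty permission set, so permissions never accumulate silently as people change jobs.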

In summary, organizational resilience is steeped in discipline. Crisis management and the daily “X factor” can create havoc even with the best-laid plans for systems maintenance. The ways in which your firm interacts with clients, partners, suppliers, and others will undoubtedly change with the heavy reliance on remote access capabilities. Those who grasp this concept now will be ahead of the game.

Remote access prowess is now an entry ticket to conducting business post-COVID-19, and it can rightly be viewed as a true competitive differentiator. When organizations run with elephants, there are only two types: the quick and the dead. Let’s encourage each other to be in the former category, rather than the latter.


© 2020 Plan B Technologies, Inc. All Rights Reserved.

For more on remote work considerations during the COVID-19 Pandemic, see the National Law Review Coronavirus News section.

COVID-19 and Cybersecurity: Combating “Zoombombing” and Securing Your Remote Working Videoconferences

As COVID-19 has prompted a massive shift by organizations to the implementation and use of remote working solutions for their employees, there has been an unfortunate, but not surprising, corresponding rise in malicious actors seeking to exploit remote working solutions.

Over the past few weeks, the most notable and prevalent “digital hijacking” has occurred on the Zoom teleconferencing application. Since the start of the COVID-19 pandemic, there has been an explosion in the number of individuals using the Zoom application. Prior to the pandemic, Zoom averaged approximately 10 million users per day. However, Zoom now estimates that approximately 200 million users per day utilize its videoconferencing application. These users include not only remote workers, but also many schoolchildren and teachers who utilize the Zoom application for remote learning.

The phenomenon commonly known as “Zoombombing” involves the infiltration of Zoom videoconferences by hackers. Once they have infiltrated a videoconference, hackers have undertaken a variety of malicious acts including, among other things, posting hate speech, stealing personal identifying information, and posting pornography or other offensive or inappropriate content to the other participants in the videoconference. Typically, hackers look to exploit Zoom conference links that are posted publicly and/or open to the public without the need for a password or access key. In response to the increase in Zoombombing attacks, some governments and organizations have restricted or prohibited the use of the Zoom application by their employees. Recognizing the threat that hackers pose to their platform, Zoom recently added new default security features and recommended that users employ additional security safeguards.

Of course, it is not only Zoom that has been targeted by malicious cyber actors. Similar attacks have occurred on numerous other commonly used videoconferencing platforms, exploiting flaws and security vulnerabilities like those seen in Zoombombing attacks.

Given the rise of attacks on videoconference applications during the COVID-19 pandemic, the FBI recently issued a warning discussing Zoombombing and other similar attacks aimed at remote working employees and students. The FBI advised that videoconference application users take the following steps:

  • Do not make meetings public and, if the option is available, utilize passwords for access to meetings;
  • Do not share links for meetings publicly;
  • Only allow meeting hosts to have the option to share their screens with other participants;
  • Ensure that you are using the most recent version of the application; and
  • Ensure that your organization’s remote working policies address requirements for videoconferencing security.

Other important security tips include:

  • Ensure that your teleconferencing sessions have active password protections in place;
  • Keep password protection on by default to prevent unauthorized users from joining or hijacking your sessions; and
  • Use a unique, one-time ID number for large or public teleconferencing calls.
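The last tip, using a unique, one-time ID and passcode per session, can be automated rather than left to habit. The sketch below, which assumes nothing about any particular conferencing platform’s API, shows how fresh credentials might be generated for each meeting using Python’s standard `secrets` module, so that a leaked link cannot be reused to join future sessions.

```python
import secrets
import string

def one_time_meeting_credentials():
    """Generate a fresh random meeting ID and passcode for a single session.

    Illustrative only: the 11-digit ID format mimics common conferencing
    IDs but is not tied to any specific platform's requirements.
    """
    meeting_id = "".join(secrets.choice(string.digits) for _ in range(11))
    passcode = secrets.token_urlsafe(9)  # 9 random bytes -> 12 URL-safe chars
    return meeting_id, passcode

mid, code = one_time_meeting_credentials()
print(f"Meeting ID: {mid}  Passcode: {code}")
```

The `secrets` module draws from a cryptographically strong random source, unlike the `random` module, which is predictable and unsuitable for credentials.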

The COVID-19 pandemic has made remote working a reality for many in a world handcuffed by social distancing. It is more important now than ever to understand the power, and the corresponding dangers, these new remote connection technologies hold in order to ensure that you maintain the safety and security of your organization’s data and information.


© 2020 Faegre Drinker Biddle & Reath LLP. All Rights Reserved.

For more work-from-home considerations amid the COVID-19 pandemic, see the National Law Review Coronavirus News page.

CARES Act Brings Changes to Federal Substance Use Disorder Privacy Law

The Coronavirus Aid, Relief, and Economic Security Act (CARES Act), enacted March 27, 2020, rewrote significant portions of 42 U.S.C. § 290dd-2, the federal statute governing the confidentiality of substance use disorder (SUD) records that is more commonly known by its implementing regulations at 42 C.F.R. Part 2 (Part 2). Among other changes, the CARES Act revises the permissible uses and disclosures of SUD records to more closely align with the HIPAA Privacy Rule, 45 C.F.R. § 164.500, et seq., when a Part 2 program obtains the patient’s prior written consent.

Historically, Part 2 programs have been restricted in their ability to share SUD records by the Part 2 regulations, which require written patient consent for each disclosure of SUD records and prohibit re-disclosure of such SUD records except in limited circumstances. The CARES Act directs the Secretary of the U.S. Department of Health and Human Services (HHS), in consultation with appropriate federal agencies (which may include the Substance Abuse and Mental Health Services Administration (SAMHSA)) to revise the Part 2 regulations as necessary to implement and enforce the statutory revisions contained in the CARES Act effective March 27, 2021. The forthcoming revisions to the Part 2 regulations may be substantial given these CARES Act changes to the federal statute.

Another significant change to the federal SUD confidentiality statute addresses the ability of health care providers to use SUD records for treatment, payment, and health care operations purposes (except for certain provider fundraising activities) in a manner more consistent with the allowances provided for protected health information under HIPAA. Specifically, the CARES Act authorizes a Covered Entity or Business Associate (as those terms are defined in the HIPAA Privacy Rule) or Part 2 Program (as defined by the Part 2 regulations) to use, disclose, or re-disclose SUD records with the patient’s written consent for treatment, payment, and health care operations as permitted by the HIPAA regulations, 45 C.F.R. Parts 160, 162, and 164, and Sections 13405(a) and (c) of the Health Information Technology for Economic and Clinical Health Act (42 U.S.C. § 17935(c)) (HITECH Act). Under the revised statute, a patient can provide written consent once that will then authorize all such future uses or disclosures for purposes of treatment, payment, and health care operations until such time as the patient revokes such consent in writing.

Additionally, the CARES Act incorporates the following privacy protections for SUD records:

  • Except as otherwise authorized by court order or by written patient consent, SUD records or testimony relaying information from the SUD records may not be disclosed or used in any civil, criminal, administrative, or legislative proceedings conducted by any federal, state, or local authority.
  • Penalties applicable to HIPAA violations (42 U.S.C. §§ 1320d-5 and 6) shall apply to a violation of 42 U.S.C. § 290dd-2.
  • The breach notification provisions of Section 13402 of the HITECH Act shall apply to SUD records.
  • By March 27, 2021, HHS will update the HIPAA Privacy Rule to require that Part 2 programs provide notice of privacy practices, written in plain language, describing the patient’s rights with respect to the Part 2 records and how the patient may exercise those rights, and describing each purpose for which the Part 2 program is permitted or required to use or disclose the SUD records without the patient’s written authorization.
  • Part 2 providers can disclose information, regardless of whether the patient gives written consent, to a public health authority (as defined by HIPAA), if the content is de-identified in accordance with the HIPAA de-identification standards set forth at 45 C.F.R. § 164.514(b).
  • Patients shall have the right to request a restriction on the use or disclosure of SUD records for treatment, payment, or health care operations.
  • Patients shall have the right to request an accounting of disclosures of SUD records consistent with the HITECH Act and HIPAA.
  • Entities shall be prohibited from discriminating against an individual on the basis of information received, whether intentionally or inadvertently, from SUD records in: (a) admission, access to, or treatment for health care; (b) hiring, firing, or terms of employment, or receipt of worker’s compensation; (c) the sale, rental, or continued rental of housing; (d) access to federal, state, or local courts; or (e) access to, approval of, or maintenance of social services and benefits provided or funded by federal, state, or local governments.
  • Recipients of federal funds shall be prohibited from discriminating against an individual on the basis of information received, whether intentionally or inadvertently, from SUD records, when offering access to services provided with such funds.

The CARES Act provides that the above-summarized amendments to the federal SUD statute will apply to uses and disclosures of information on or after March 27, 2021. While these changes implement long-awaited alignment efforts to enable data sharing across providers in a manner consistent with the allowances permitted under HIPAA, the real impact of these changes will come from the forthcoming implementing regulations, which are also due to be issued by March 27, 2021.


©2020 Greenberg Traurig, LLP. All rights reserved.