TCPA Turnstile: 2022 Year in Review (TCPA Case Update Vol. 17)

As 2022 comes to a close, we wanted to look back at the most significant Telephone Consumer Protection Act, 47 U.S.C. § 227 (“TCPA”) decisions of the year.  While we didn’t see the types of landscape-altering decisions that we saw in 2021, there’s still plenty to take note of.  We summarize here the biggest developments since our last update, listed by issue category in alphabetical order.

Arbitration: In Kelly v. McClatchy Co., LLC, 2022 WL 1693339 (E.D. Cal. May 26, 2022), the District Court denied the defendant’s motion to compel arbitration because the contractual relationship between the parties had terminated before the unwanted calls were made. Plaintiffs had originally signed defendant’s Terms of Service, which bound them to an arbitration provision for all legal disputes. Plaintiffs then cancelled their subscriptions, which ended the enforceability of the Terms of Service against them. Plaintiffs nonetheless later received unwanted renewal calls from defendant, which the court deemed were not covered by the arbitration clause, even under a theory of post-expiration enforcement.

ATDS: Following Facebook v. Duguid, 141 S. Ct. 1163 (2021), courts are still struggling to define an “automatic telephone dialing system,” and the Third Circuit weighed in through Panzarella v. Navient Sols., Inc., 2022 WL 2127220 (3d Cir. June 14, 2022).  The district court granted defendant’s motion for summary judgment on the grounds that plaintiffs failed to show that an ATDS was used to call their phones. The Third Circuit upheld the summary judgment ruling but did not decide whether the dialing equipment used constituted an “ATDS” under the TCPA. Rather, its ruling hinged on the fact that defendant’s dialer pulled phone numbers from its internal database, not computer-generated tables. As such, the Third Circuit found that even if a system may well qualify as an ATDS under the TCPA, a defendant cannot be held liable unless the system was actually used in that capacity.

In an interesting move, the court in Jiminez v. Credit One Bank, N.A., Nco Fin. Sys., 2022 WL 4611924 (S.D.N.Y. Sept. 30, 2022), narrowed the definition of an “ATDS,” choosing to reject the Second Circuit approach in favor of the Third Circuit’s approach in Panzarella. Here, plaintiff alleged that defendant used a dialing system to place numerous calls without consent. The Second Circuit follows the majority view that, if a system used to dial numbers has the ability to store or generate random numbers, calls made with it violate the TCPA, even if the random dialing function is not actually utilized. But the court in Jiminez found the Third Circuit’s reasoning persuasive and applied it to the case, finding that plaintiff failed to show the dialing system was actually used in a way that violated the TCPA. It granted summary judgment to defendants on the TCPA claims because the evidence showed the numbers dialed were all taken from a pre-approved customer list, not generated by random dialing.

Similarly, in Borden v. Efinancial, LLC, 2022 WL 16955661 (9th Cir. Nov. 16, 2022), the Ninth Circuit also adopted a narrower definition of an ATDS, finding that to qualify as an ATDS, a dialing system must use its automation function to generate and dial random or sequential telephone numbers. This means that a mere ability to generate random or sequential numbers is not enough; the generated numbers must actually be telephone numbers. Given the circuit split on this issue, it seems likely that the Supreme Court will eventually have to weigh in.

Notably, in May 2022, the FCC issued a new order targeting unlawful robocalls originating outside the country. The order creates a new classification of service providers called “Gateway Providers,” which have traditionally served as transmitters of international robocalls. These providers are domestic intermediaries that are now required to register with the FCC’s Robocall Mitigation Database, file a mitigation plan with the agency, and certify compliance with the practices therein.

Class Certification: In Drazen v. Pinto, 41 F. 4th 1354 (11th Cir. July 27, 2022), the Eleventh Circuit considered the issue of standing in a TCPA class action. Plaintiffs’ proposed settlement class included unnamed plaintiffs who had received only one unsolicited text message. The district court approved the class with these members in it, finding that they could remain because they had standing in their respective circuits and that only the named plaintiffs needed standing. But because the Eleventh Circuit had held in an earlier case (Salcedo v. Hanna, 936 F.3d 1162 (11th Cir. 2019)) that just one unwanted message is not sufficient to satisfy Article III, it found that some of the class members lacked standing. The Eleventh Circuit accordingly vacated the class certification and settlement and remanded, allowing the class to be redefined so that all members have standing.

Consent: Chennette v. Porch, 2022 WL 6884084 (9th Cir. Oct. 12, 2022), involved a defendant who used cell phone numbers posted on publicly available websites, like Yelp and Facebook, to solicit client leads to contractors through unwanted text messages. The court rejected defendant’s argument that plaintiffs consented to the calls because their businesses were advertised through these public posts with the intent of obtaining new business. Beyond that, the court also found that even though these cell phones were used for both personal and business purposes, the numbers still fell within the protection of the TCPA, allowing plaintiffs to satisfy both statutory and Article III standing.

Damages: In Wakefield v. ViSalus, 2022 WL 11530386 (9th Cir. Oct. 20, 2022), the Ninth Circuit adopted a new test to determine the constitutionality of an exceptionally large damages award. Defendant was a marketing company that made unwanted calls to former customers, soliciting them to renew their subscriptions to weight-loss products. After a multi-day trial, a jury returned a verdict for the plaintiff with a statutory damages award of almost $1 billion. The Ninth Circuit reversed and remanded to the district court to consider the constitutionality of the award. While the district court’s test asked whether the award was “so severe and oppressive” as to violate defendant’s due process rights, the Ninth Circuit instructed it to reassess using a test outlined in a different case, Six Mexican Workers. The Six Mexican Workers test assesses the following factors in determining the constitutionality of the damages award: “1) the amount of award to each plaintiff, 2) the total award, 3) the nature and persistence of the violations, 4) the extent of the defendant’s culpability, 5) damage awards in similar cases, 6) the substantive or technical nature of the violations, and 7) the circumstances of each.” We are still awaiting that determination on remand.

Standing: In Hall v. Smosh Dot Com, Inc., 2022 WL 2704571 (E.D. Cal. July 12, 2022), the court addressed whether plaintiff had standing under the TCPA as a cell phone plan subscriber where the text messages were received only by someone else on the plan; in this case, plaintiff was the subscriber and her minor son was the recipient of the unwanted text messages. The court granted defendant’s motion to dismiss for lack of standing because she could not show that subscriber status alone confers adequate standing under Article III.

In Rombough v. State Farm, No. 22-CV-15-CJW-MAR (N.D. Iowa June 9, 2022), the court evaluated standing under the TCPA based on a plaintiff’s number being listed on the Do Not Call list. It determined that being on the DNC list is not an easy ticket into court: a plaintiff must allege more than just having a number on the list. Rather, the plaintiff must have actually registered their own number on the list.

© 2022 Vedder Price
For more Cybersecurity and Privacy Law news, click here to visit the National Law Review.

Ankura CTIX FLASH Update – December 13, 2022

Malware Activity

Uber Discloses New Data Breach Related to Third-Party Vendor

Uber has disclosed a new data breach that is related to the security breach of Teqtivity, a third-party vendor that Uber uses for asset management and tracking services. A threat actor named “UberLeaks” began leaking allegedly stolen data from Uber and Uber Eats on December 10, 2022, on a hacking forum. The exposed data includes Windows domain login names and email addresses, corporate reports, IT asset management information, data destruction reports, multiple archives of apparent source code associated with mobile device management (MDM) platforms, and more. One document in particular contained over 77,000 Uber employee email addresses and Windows Active Directory information.

UberLeaks posted the allegedly stolen information in four (4) separate postings regarding the Uber MDM, Uber Eats MDM, Teqtivity MDM, and TripActions MDM platforms. The actor included one (1) member of the Lapsus$ threat group in each post, but Uber confirmed that Lapsus$ is not related to this December breach, despite being previously linked to the company’s cyberattack in September 2022, and that the code identified is not owned by Uber.

Teqtivity published a data breach notification on December 12, 2022, stating that the company is aware of “customer data that was compromised due to unauthorized access to our systems by a malicious third party” and that the third party obtained access to its AWS backup server that housed company code and data files. Teqtivity also noted that its ongoing investigation identified the following exposed information: first name, last name, work email address, work location details, device serial number, device make, device model, and technical specs. The company confirmed that home addresses, banking information, and government identification numbers are not collected or retained. Uber and Teqtivity are both in the midst of ongoing investigations into this data breach. CTIX analysts will provide updates on the matter once available.

Threat Actor Activity

PLAY Ransomware Claims Responsibility for Antwerp Cyberattack

After last week’s ransomware attack on the city of Antwerp, a threat organization has claimed responsibility and has begun making demands. The threat group, tracked as PLAY ransomware, is an up-and-coming ransomware operation that has been posting leaked information since November 2022, according to an available posting on their leak site. Samples of the threat group’s ransomware variants have shown activity dating back to June 2022, and the group targeted the Argentina Court of Cordoba in August. While PLAY’s ransomware attack crippled several sectors of Antwerp, it appears to have had a significant impact on residential facilities throughout the city, as stated by officials. According to PLAY NEWS, PLAY’s ransomware leak site, the publication date for the exfiltrated data is Monday, December 19, 2022, if the undisclosed ransom is not paid. PLAY threat actors claim to have 557 gigabytes (GB) worth of Antwerp-related data including but not limited to personally identifiable information, passports, identification cards, and financial documents. CTIX continues to monitor the developing situation and will provide additional updates as more information is released.

Vulnerabilities

Fortinet Patches Critical RCE Vulnerability in FortiOS SSL-VPN Products

After observing active exploitation attempts in the wild, the network security solutions manufacturer Fortinet has patched a critical vulnerability affecting their FortiOS SSL-VPN products. The flaw, tracked as CVE-2022-42475, was given a CVSS score of 9.3/10 and is a heap-based buffer overflow, which could allow unauthenticated attackers to perform arbitrary remote code execution (RCE) if successfully exploited. Specifically, the vulnerability exists within the FortiOS sslvpnd product, which enables individual users to safely access an organization’s network, client-server applications, and internal network utilities and directories without the need for specialized software. The vulnerability was first discovered by researchers from the French cybersecurity firm Olympe Cyberdefense, who warned users to monitor their logs for suspicious activity until a patch was released. Although very few technical details about the exploitation have been divulged, Fortinet did share lists of suspicious artifacts and IPs. Based on research by Ankura CTIX analysts, the IPs released by Fortinet are located around the globe and are not associated with known threat actors at this time. To prevent exploitation, all Fortinet administrators leveraging FortiOS sslvpnd should ensure that they download and install the latest patch. If organizations cannot immediately patch their systems due to the business interruption it would cause, Olympe Cyberdefense suggests “customers monitor logs, disable the VPN-SSL functionality, and create access rules to limit connections from specific IP addresses.” A list of the affected products and their solutions, as well as the indicators of compromise, can be found in the Fortinet advisory linked below.
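For teams following the "monitor logs" guidance, one simple approach is to sweep existing logs against the indicator IPs Fortinet published. A minimal sketch, assuming nothing about any particular log format; the IP addresses below are documentation-range placeholders, not actual indicators from the advisory:

```python
import re

# Placeholder indicators -- substitute the IP list from Fortinet's
# advisory for CVE-2022-42475.
SUSPICIOUS_IPS = {"192.0.2.10", "198.51.100.7"}

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def flag_suspicious_lines(log_lines, indicators=SUSPICIOUS_IPS):
    """Return the log lines that mention any indicator IP."""
    hits = []
    for line in log_lines:
        if any(ip in indicators for ip in IP_RE.findall(line)):
            hits.append(line)
    return hits

sample = [
    "sslvpnd: connection from 198.51.100.7 port 443",
    "sslvpnd: connection from 203.0.113.5 port 443",
]
print(flag_suspicious_lines(sample))  # only the first line is flagged
```

This is triage, not detection: a clean sweep does not prove the device was never compromised, so patching remains the priority.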

The semi-weekly Ankura Cyber Threat Investigations and Expert Services (CTIX) FLASH Update is designed to provide timely and relevant cyber intelligence pertaining to current or emerging cyber events. The preceding is a collection of cyber threat intelligence leads assembled over the past few days and typically includes high level intelligence pertaining to recent threat group/actor activity and newly identified vulnerabilities impacting a wide range of industries and victims. 

Copyright © 2022 Ankura Consulting Group, LLC. All rights reserved.

How Many Websites Now Have Cookie Banners?

A “cookie banner” refers to a pop-up notice on a website that discusses the site’s use of cookies. There is little standardization concerning how cookie banners are deployed. For example, websites can position them in different places on the screen (e.g., across the top of the screen, across the bottom of the screen, in a corner of the screen, or centered on the screen). Cookie banners also utilize different language to describe what cookies are and use different terms to describe options consumers may have in relation to the deployment of cookies. Some cookie banners require that a consumer interact with the banner (e.g., accept, cancel, or click out of) before the consumer can visit a website; other cookie banners are designed to disappear from view after several seconds.

As of October 2022, 45% of Fortune 500 websites were utilizing a cookie banner.[1] That represents an increase of nearly 11 percentage points since the prior survey.[2]
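The arithmetic behind that figure, using the adoption rates reported in the two surveys cited in the footnotes:

```python
pct_2022 = 45.0   # share of surveyed sites with a cookie banner, Sept-Oct 2022
pct_2020 = 34.2   # share in the December 2020 survey

increase_points = pct_2022 - pct_2020
print(round(increase_points, 1))  # 10.8 -> roughly the 11-point increase cited above
```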


[1] Greenberg Traurig LLP reviewed the publicly available privacy notices and practices of 555 companies (the Survey Population). The Survey Population comprises companies that had been ranked within the Fortune 500 at some point in the past five years as well as additional companies selected from industries that are underrepresented in the Fortune 500. While the Survey Population does not fully match the current Fortune 500 as a result of industry consolidation and shifts in company capitalization, we believe that the aggregate statistics rendered from the Survey Population are representative of mature companies. Greenberg Traurig’s latest survey was conducted between September and October 2022.

[2] Greenberg Traurig LLP conducted a survey in December 2020 which showed that 34.2% of websites had cookie banners.

©2022 Greenberg Traurig, LLP. All rights reserved.

Privacy Rights in a Remote Work World: Can My Employer Monitor My Activity?

The rise in remote work has brought with it a rise in employee monitoring.  Between 2019 and 2021, the percentage of employees working primarily from home tripled.  As “productivity paranoia” crept in, employers steadily adopted employee surveillance technologies.  This has raised questions about the legal and ethical implications of enhanced monitoring, in some cases prompting proposed legislation or the expanded use of laws already on the books.

Employee monitoring is nothing new.  Employers have long used supervisors and timeclock programs, among other systems, to monitor employee activity.  What is new, however, is the proliferation of sophisticated monitoring technologies—as well as the expanding number and variety of companies that are employing them.

 While surveillance was once largely confined to lower-wage industries, white-collar employers are increasingly using surveillance technologies to track their employees’ activity and productivity.  Since the COVID-19 pandemic started in March 2020, one in three medium-to-large companies has adopted some form of employee monitoring, with the total fraction of employers using surveillance technologies closer to two in three.  Workers who are now subject to monitoring technologies include doctors, lawyers, academics, and even hospice chaplains.  Employee monitoring technologies can track a range of information, including:

  • Internet use (e.g., which websites and apps an employee has visited and for how long);

  • How long a computer sits idle;

  • How many keystrokes an employee types per hour;

  • Emails that are sent or received from a work or personal email address (if the employee is logged into a personal account on a work computer);

  • Screenshots of a computer’s display; and

  • Webcam photos of the employee throughout the day.
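Several of the metrics above, such as idle time and keystrokes per hour, reduce to simple arithmetic over timestamped input events. A hypothetical sketch of how a monitoring agent might derive them, not based on any specific product:

```python
from datetime import datetime, timedelta

def summarize_activity(key_events, idle_threshold=timedelta(minutes=5)):
    """Estimate keystrokes/hour and total idle time from sorted keystroke timestamps.

    Any gap between consecutive keystrokes longer than `idle_threshold`
    is counted as idle time (a common, if crude, heuristic).
    """
    if len(key_events) < 2:
        return {"keystrokes_per_hour": 0.0, "idle": timedelta(0)}
    span = key_events[-1] - key_events[0]
    idle = timedelta(0)
    for prev, cur in zip(key_events, key_events[1:]):
        gap = cur - prev
        if gap > idle_threshold:
            idle += gap
    hours = span.total_seconds() / 3600
    return {
        "keystrokes_per_hour": len(key_events) / hours if hours else 0.0,
        "idle": idle,
    }

t0 = datetime(2022, 12, 13, 9, 0)
events = [t0, t0 + timedelta(seconds=30), t0 + timedelta(minutes=20)]
print(summarize_activity(events))
```

The simplicity of the computation is part of why such metrics are contested: a long "idle" gap may reflect a phone call, a meeting, or reading a document, not inactivity.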

These new technologies, coupled with the shift to remote work, have blurred the line between the professional and the personal, the public and the private.  In the face of increased monitoring, this blog explores federal and state privacy regulations and protections for employees.

What are the legal limitations on employee monitoring?

 There are two primary sources of restrictions on employee monitoring: (1) the Electronic Communications Privacy Act of 1986 (ECPA), 18 U.S.C. §§ 2510 et seq.; and (2) common-law protections against invasions of privacy.  The ECPA is the only federal law that regulates the monitoring of electronic communications in the workplace.  It extends the Federal Wiretap Act’s prohibition on the unauthorized interception of communications, which was initially limited to oral and wire communications, to cover electronic communications like email.  As relevant here, the ECPA contains two major exceptions.  The first exception, known as the business purpose exception, allows employers to monitor employee communications if they can show that there is a legitimate business purpose for doing so.  The second exception, known as the consent exception, permits employers to monitor employee communications so long as they have consent to do so.  Notably, this exception is not limited to business communications, allowing employers to monitor employees’ personal communications if they have the requisite consent.  Together, the business purpose and consent exceptions significantly limit the force of the ECPA, such that, standing alone, it permits most forms of employee monitoring.

In addition to the ECPA’s limited protections from surveillance, however, some states have adopted additional protections of employee privacy.  Several state constitutions, including those of California, South Carolina, Florida, and Louisiana, guarantee citizens a right to privacy.  While these provisions do not directly regulate employers’ activity, they may bolster employees’ claims to an expectation of privacy.  Other states have enacted legislation that limits an employer’s ability to monitor employees’ social media accounts.  Virginia, for example, prohibits employers from requiring employees to disclose their social media usernames or passwords.  And a few states have enacted laws to bolster employees’ access to their data.  For example, the California Privacy Rights Act (CPRA), which comes into full effect on January 1, 2023, and amends and expands the California Consumer Privacy Act (CCPA), will provide employees with the right to access, delete, or opt out of the sale of their personal information, including data collected through employee monitoring programs.  Employees will also have the right to know where, when, and how employers are using their data.  The CPRA’s protections are limited, however.  Employers will still be able to use surveillance technologies, and to make employment decisions based on the data these technologies gather.

Finally, several states require employers to provide notice to employees before monitoring or intercepting electronic communications.  New York recently adopted a law,  Senate Bill (SB) S2628, that requires all private-sector employers to provide notice of any electronic monitoring to employees (1) upon hiring, via written or electronic employee acknowledgment; and (2) in general, in a “conspicuous place” in the workplace viewable to all employees.  The new law is aimed at the forms of monitoring that have proliferated since the shift to remote work, and covers surveillance technologies that target the activities or communications of individual employees.  Delaware and Connecticut also have privacy laws that predate SB S2628.  Delaware requires notice to employees upon hire that they will be monitored, but does not require notice within the workplace.  Meanwhile, Connecticut requires notice of monitoring to be conspicuously displayed in the workplace but does not require written notice to employees upon hire.  Accordingly, in many states, employee privacy protections exceed the minimum standard of the ECPA, though they still are not robust.

How does employee monitoring intersect with other legal rights?

Other legal protections further limit employee monitoring.

First, in at least some jurisdictions, employees who access personal emails on their work computer, or conduct other business that would be protected under attorney-client privilege, maintain their right to privacy for those communications.  In Stengart v. Loving Care Agency, Inc., 408 N.J. Super. 54 (App. Div. 2009), the Superior Court of New Jersey, Appellate Division, considered a case in which an employee had accessed her personal email account on her employer’s computer and exchanged emails from that account with her attorney regarding a possible employment case against her employer.  The employer, who had installed an employee monitoring program, was able to access and read the employee’s emails.  The Court held that the employee still had a reasonable expectation of privacy and that sending and receiving emails on a company-issued laptop did not waive the attorney-client privilege.  The Court thus required the employer to turn over all emails between the employee and her attorney that were in its possession and directed the employer to delete all of these emails from its hard drives.  Moving forward, the Court instructed that, while “an employer may trespass to some degree into an employee’s privacy when buttressed by a legitimate business interest,” such a business interest held “little force . . . when offered as the basis for an intrusion into communications otherwise shielded by the attorney-client privilege.”  Stengart, 408 N.J. Super. at 74.

Second, employee monitoring can run afoul of protections related to union and other concerted activity.  The General Counsel for the National Labor Relations Board (NLRB) recently announced a plan to curtail workplace surveillance technologies.  Existing law prohibits employers from using surveillance technologies to monitor or record union activity, such as by recording employees engaged in picketing, or otherwise interfering with employees’ rights to engage in concerted activity.  The General Counsel’s plan outlines a new, formal framework for analyzing whether employee monitoring interferes with union or concerted activity.  Under this framework, an employer presumptively violates Section 7 or Section 8 of the National Labor Relations Act (NLRA) where their “surveillance and management practices, viewed as a whole, would tend to interfere with or prevent a reasonable employee from engaging in” protected activities.  Examples of technologies that are presumptively violative include key loggers, webcam photos, and audio recordings.

Do I have a claim against my employer?

While federal and state restrictions on employee monitoring are limited, you may have a legal claim against your employer if its monitoring is overly intrusive or it mishandles your personal data.  First, an invasion-of-privacy claim, for the tort of intrusion upon seclusion, could exist if your employer monitors your activity in a way that would be highly offensive to a reasonable person, such as by accessing your work laptop’s webcam or internal microphone and listening in on private affairs in your home.  Second, you may have a claim against your employer for violating its legal duty to protect your personal information if data it collects in the course of monitoring your work activity is compromised.  In Dittman v. UPMC, 196 A.3d 1036 (Pa. 2018), employees at the University of Pittsburgh Medical Center and UPMC McKeesport (collectively, UPMC) filed a class-action complaint alleging that UPMC breached its legal duty of reasonable care when it failed to protect employees’ data, which was stolen from UPMC computers.  The Pennsylvania Supreme Court found for the plaintiffs, holding that employers have an affirmative duty to protect the personal information of their employees.  Because the Pennsylvania Supreme Court’s holding was grounded in tort principles that are recognized by many states (i.e., duty of care and negligence), it may pave a path for future cases in other jurisdictions.  Third, if any medical information is accessed and improperly used by your employer, you may have a claim under the Americans with Disabilities Act, which requires that employers keep all employee medical information confidential and separate from all other personnel information.  See 42 U.S.C. § 12112(d)(3)(B)-(C), (4)(B)-(C).

Conclusion

Employees are monitored more consistently and in more ways than ever before. By and large, employee monitoring is legal.  Employers can monitor your keystrokes, emails, and internet activity, among other metrics.  While federal regulation of employee monitoring is limited, some states offer additional protections of employee privacy.  Most notably, employers are increasingly required to inform employees that their activity will be monitored.  Moreover, other legal rights, such as the right to engage in concerted activity and to have your medical information kept confidential, provide checks on employee surveillance.  As employee monitoring becomes more commonplace, restrictions on surveillance technologies and avenues for legal recourse may also grow.

Katz Banks Kumin LLP Copyright ©

New York Enacts Crypto Mining Moratorium

On November 22, 2022, New York Governor Kathy Hochul signed into law a two-year moratorium against granting permits to crypto mining operations that “are operated through electric generating facilities that use a carbon-based fuel.” Renewable sources of energy are not impacted.

The legislation, among the first of its kind in the nation, prohibits the state’s Department of Environmental Conservation from issuing any new or renewal permits to electricity generating facilities reliant on carbon-based fuel supporting crypto mining operations that use proof-of-work authentication methods to validate blockchain transactions. The law applies to all permits and renewal applications filed after its effective date, and therefore grandfathers certain businesses that held permits prior to the date of enactment. The Department of Environmental Conservation and the Department of Public Service are also tasked under the legislation with preparing an environmental impact statement on cryptocurrency mining operations that use proof-of-work authentication techniques.
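The "proof-of-work" method the statute singles out is energy-intensive by design: miners must brute-force a hashing puzzle, and every failed guess consumes electricity. A toy illustration of the mechanism (difficulty is vastly simplified compared to real networks):

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce such that SHA-256(data + nonce) starts with `difficulty` hex zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1  # each failed guess is discarded work -- the source of the energy cost

nonce = mine("example-block", difficulty=4)
print(nonce, hashlib.sha256(f"example-block{nonce}".encode()).hexdigest()[:8])
```

Raising the difficulty by one hex digit multiplies the expected number of guesses by sixteen, which is why industrial-scale mining draws the electric load that prompted the moratorium.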

For more Environmental Law news, click here to visit the National Law Review.

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

ANOTHER TRILLION DOLLAR CASE? TikTok Hit in MASSIVE CIPA Suit Over Its Business Model of Profiting from Advertising by Collecting and Monetizing User Data

Data privacy lawsuits are EXPLODING, and the privacy issues keep piling up for TikTok, one of the country’s most popular mobile apps.

Following its recent $92 million class-action data privacy settlement for its alleged violation of the Illinois Biometric Information Privacy Act (BIPA), TikTok is now facing a CIPA and federal Wiretap Act class action for collecting users’ data via its in-app browser without Plaintiff’s and class members’ consent.

The complaint alleges “[n]owhere in [Tik Tok’s] Terms of Service or the privacy policies is it disclosed that Defendants compel their users to use an in-app browser that installs JavaScript code into the external websites that users visit from the TikTok app which then provides TikTok with a complete record of every keystroke, every tap on any button, link, image or other component on any website, and details about the elements the users clicked.”

Despite being a free app, TikTok makes billions in revenue by collecting users’ data without their consent.

“The world’s most valuable resource is no longer oil, but data.”

As we’ve discussed before, many companies do collect data for legitimate purposes with consent. However, this new complaint alleges a very specific type of data collection practice conducted without the TikTok user’s OR the third-party website operator’s consent.

TikTok allegedly relies on selling digital advertising spots for income, and the algorithm used to determine which advertisements to display on a user’s home page relies on tracking software to understand a user’s interests and habits. To drive this business, TikTok presents users with links to third-party websites that open in TikTok’s in-app browser without the user (or the third-party website operator) knowing this is occurring. The user’s keystrokes are simultaneously intercepted and recorded.

“Specifically, when a user attempts to access a website, by clicking a link while using the TikTok app, the website does not open via the default browser.  Instead, unbeknownst to the user, the link is opened inside the TikTok app, in [Tik Tok’s] in-app browser.  Thus, the user views the third-party website without leaving the TikTok app.”

The TikTok in-app browser does not just track purchase information; it allegedly tracks detailed private and sensitive information, including information about a person’s physical and mental health.

For example, health providers and pharmacies, such as Planned Parenthood, have a digital presence on TikTok, with videos that appear on users’ feeds.

Once a user clicks on this link, they are directed to Planned Parenthood’s main webpage via TikTok’s in-app browser. While the user is assured that his or her information is “private and anonymous,” TikTok is allegedly intercepting it and monetizing it to send targeted advertisements to the user, without the user’s or Planned Parenthood’s consent.

The complaint not only details the global privacy concerns regarding TikTok’s privacy practices (including FTC investigations, an outright ban preventing the U.S. military from using it, TikTok’s BIPA lawsuit, and an uptick in privacy advocate concerns), it also specifically calls out the concerns around collecting reproductive health information after the demise of Roe v. Wade this year:

“TikTok’s acquisition of this sensitive information is especially concerning given the Supreme Court’s recent reversal of Roe v. Wade and the subsequent criminalization of abortion in several states.  Almost immediately after the precedent-overturning decision was issued, anxieties arose regarding data privacy in the context of commonly used period and ovulation tracking apps.  The potential of governments to acquire digital data to support prosecution cases for abortions was quickly flagged as a well-founded concern.”

The allegations are alarming, and the 76-page complaint can be read here: TikTok.

In any event, the class is alleged as:

“Nationwide Class: All natural persons in the United States who used the TikTok app to visit websites external to the app, via the in-app browser.

California Subclass: All natural persons residing in California who used the TikTok app to visit websites external to the app, via the in-app browser.”

The complaint alleges that California law applies to all class members. As with the Meta CIPA complaint, we will have to wait and see how a nationwide class can be brought under a California statute.

On the CIPA claim, the plaintiff, Austin Recht, seeks an unspecified amount of damages for the class; the demand is $5,000 per violation or three times the amount of damages sustained by the plaintiff and the class, in an amount to be proven at trial.

We’ll obviously continue to keep an eye on this.

Article By Puja J. Amin of Troutman Firm

For more communications and media legal news, click here to visit the National Law Review.

© 2022 Troutman Firm

CMS Issues Calendar Year 2023 Home Health Final Rule

On November 4, 2022, the Centers for Medicare & Medicaid Services (CMS) published the calendar year 2023 Home Health Prospective Payment System Rate final rule, which updates Medicare payment policies and rates for home health agencies.  Some of the key changes implemented by the final rule are summarized below.

  • Home Health Payment Rates. Instead of imposing a significant rate cut, as was included in the proposed rule released earlier this year, CMS has increased calendar year 2023 Medicare payments to home health agencies by 0.7 percent or $125 million in comparison to calendar year 2022.


  • Patient-Driven Groupings Model and Behavioral Changes. A -3.925 percent permanent adjustment to the 30-day payment rate has been implemented for calendar year 2023. The purpose of this adjustment is to ensure that aggregate expenditures under the new patient-driven groupings model payment system are equal to what they would have been under the old payment system. Additional permanent adjustments are expected to be proposed in future rulemaking.


  • Permanent Cap on Wage Index Decreases. The rule finalizes a permanent 5 percent cap on negative wage index changes for home health agencies.


  • Recalibration of Patient-Driven Groupings Model Case-Mix Weights. CMS has finalized the recalibration of the case-mix weights, including the functional levels and co-morbidity adjustment subgroups and the low utilization payment adjustment thresholds, using calendar year 2021 data in an effort to more accurately pay for the types of patients home health agencies are serving.


  • Telehealth. CMS plans to begin collecting data on the use of telecommunications technology under the home health benefit on a voluntary basis beginning on January 1, 2023, and on a mandatory basis beginning on July 1, 2023. Further program instruction for reporting this information on home health claims is expected to be issued in January of 2023.


  • Home Infusion Therapy Benefit. The Consumer Price Index for all urban consumers for June 2022 is 9.1 percent and the corresponding productivity adjustment is a reduction of 0.4 percent. Therefore, the final home infusion therapy payment rate update for calendar year 2023 is an increase of 8.7 percent. The standardization factor, the final geographic adjustment factors, national home infusion therapy payment rates, and locality-adjusted home infusion therapy payment rates will be posted on CMS’ Home Infusion Therapy Services webpage once the rates are finalized.


  • Finalization of All-Payer Policy for the Home Health Quality Reporting Program. CMS has ended the temporary suspension of Outcome and Assessment Information Set (OASIS) data collection on non-Medicare/non-Medicaid home health agency patients. Beginning in calendar year 2027, home health agencies will be required to submit all-payer OASIS data, with two quarters of data required for program year 2027. A phase-in period will occur from January 1, 2025 through June 30, 2025, and during that time the failure to submit the data will not result in a penalty.


  • Health Equity Request for Information. The comments received from stakeholders providing feedback on health equity measure development for the Home Health Quality Reporting Program and the potential future application of health equity in the Home Health Value-Based Purchasing Expanded Model’s scoring and payment methodologies are summarized in the final rule.


  • Baseline Years in the Expanded Home Health Value-Based Purchasing (HHVBP) Model. For the expanded HHVBP Model, CMS is updating definitions; changing the home health agency baseline calendar year (from 2019 to 2022 for existing home health agencies with a Medicare certification date prior to January 1, 2019, and from 2021 to 2022 for home health agencies with a Medicare certification date prior to January 1, 2022); and changing the model baseline calendar year from 2019 to 2022, starting in 2023.

For more Health Care legal news, click here to visit the National Law Review.

Copyright © 2022 Robinson & Cole LLP. All rights reserved.

Dead Canary in the LBRY

In a case watched by companies that offered and sold digital assets,1 federal district court Judge Paul Barbadoro recently granted summary judgment for the Securities and Exchange Commission (“SEC”) against LBRY, Inc.2 This case is seen by some as a canary in the coal mine, in that the decision supports the SEC’s view, espoused by SEC Chairman Gary Gensler, that nearly all digital assets are securities that were offered and sold in violation of the securities laws.3 For FinTech companies hoping to avoid SEC enforcement actions, the LBRY decision strongly suggests that all companies offering digital assets could be viewed by courts as satisfying the Howey test for investment contract securities.4

LBRY is a company that promised to use blockchain technology to allow users to share videos and images without the need for third-party intermediaries like YouTube or Facebook. LBRY offered and sold LBRY Credits, called LBC tokens, that would compensate participants in its blockchain network and could be spent by LBRY users on things like publishing content, tipping content creators, and purchasing paywalled content. At launch, LBRY had pre-mined 400 million LBC for itself, and approximately 600 million LBC would be available in the future to compensate miners. LBRY spent about half of its 400 million LBC tokens on various endeavors, such as direct sales and using the tokens to incentivize software developers and software testers.

Judge Barbadoro concluded as a matter of law (i.e., that no reasonable jury could conclude otherwise) that the LBC tokens were securities under Section 5 of the Securities Act. Applying the Howey test, Judge Barbadoro noted the only prong of the Howey test that was disputed in the case was: Did investors buy LBC tokens “with an expectation of profits to be derived solely from the efforts of the promoter or a third party”? Judge Barbadoro answered resoundingly, “Yes.”

Most important to his conclusion that investors purchased LBC tokens with the expectation of profits solely through the efforts of the promoter (i.e., LBRY) were: the many statements made by LBRY employees and community representatives about the price and trading volume of LBC; and the many statements LBRY made about the development of its content platform, including how the platform would yield long-term value to LBC holders. Critically, however, Judge Barbadoro found that even if LBRY had made none of these statements, the LBC token would still constitute a security because “any reasonable investor who was familiar with the company’s business model would have understood the connection” between LBC value growth and LBRY’s efforts to grow the use of its network. Even if LBRY had never said a word about the LBC token, Judge Barbadoro found that the token would constitute a security because LBRY retained hundreds of millions of LBC tokens for itself, thus signaling to investors that it was committed to working to improve the token’s value.

Judge Barbadoro flatly rejected LBRY’s defense that the LBC token cannot be a security because the token has utility.5 The judge noted, “Nothing in the case law suggests that a token with both consumptive and speculative uses cannot be sold as an investment contract.” Likewise, Judge Barbadoro was unmoved by LBRY’s argument that it had no “fair notice” that the SEC would treat digital assets as unregistered securities simply because this was the first time the SEC had brought an enforcement action against an issuer of digital currency.6

In sum, if Judge Barbadoro’s reasoning is applied more broadly to the thousands of digital assets that have emerged over the last several years—including those whose issuers tout the so-called “utility” of their tokens—they will all likely be deemed digital asset securities that were offered and sold without a registration or an exemption from registration.

The LBRY decision is yet another case in which a court has concluded a digital asset is a security. Developers of digital assets must proceed with a high degree of caution. The SEC continues to display a high degree of willingness to initiate investigations and enforcement actions against issuers of digital assets that are viewed as securities under the Howey and Reeves tests, investment companies, or security-based swaps.

For more Securities Law and Digital Assets news, click here to visit the National Law Review.

Copyright ©2022 Nelson Mullins Riley & Scarborough LLP


FOOTNOTES

1. The SEC defines “digital assets” as intangible “asset[s] that [are] issued and transferred using distributed ledger or blockchain technology.” Statement on Digital Asset Securities Issuance and Trading, Division of Corporation Finance, Division of Investment Management, and Division of Trading and Markets, SEC (Nov. 16, 2018), available here.

2. SEC v. LBRY, Inc., No. 1:21-cv-00260-PB (D.N.H. filed Mar. 29, 2021), available here. A copy of the complaint against LBRY can be found here.

3. See, e.g., Gary Gensler, Speech – “A ‘New’ New Era: Prepared Remarks Before the International Swaps and Derivatives Association Annual Meeting” (May 11, 2022) (“My predecessor Jay Clayton said it, and I will reiterate it: Without prejudging any one token, most crypto tokens are investment contracts under the Supreme Court’s Howey Test.”), available here. Section 5(a) of the Securities Act of 1933 (the “Securities Act”) provides that, unless a registration statement is in effect as to a security, it is unlawful for any person, directly or indirectly, to sell securities in interstate commerce. Section 5(c) of the Securities Act provides a similar prohibition against offers to sell or offers to buy securities unless a registration statement has been filed.

4. SEC v. W.J. Howey Co., 328 U.S. 293 (1946). This case did not address when digital assets could be deemed debt securities under the test articulated by the U.S. Supreme Court in Reves v. Ernst & Young, 494 U.S. 56, 66-67 (1990), or when digital assets could be deemed an investment company under the Investment Company Act of 1940. See, e.g., In the Matter of Blockfi Lending, Feb. 14, 2022, available here. This case also does not address when a digital asset is a security-based swap. See, e.g., In the Matter of Plutus Financial, Inc. (July 13, 2020), available here.

5. The argument that a digital asset is not a security because it has “utility” is a favorite argument of critics of the SEC’s enforcement actions against issuers of digital assets. Unfortunately, the “utility” argument appears to be of little merit when the digital asset is offered and sold to raise capital.

6. This is an argument that has been made by a number of defendants in SEC enforcement actions involving digital asset securities.

“Red Flags in the Mind Set”: SEC Sanctions Three Broker/Dealers for Identity Theft Deficiencies

In 1975, around the time of “May Day” (1 May 1975), which brought the end of fixed commission rates and the birth of registered clearing agencies for securities trading (1976), the U.S. Securities and Exchange Commission (“SEC”) created a designated unit to deal with the growth of trading and the oversight of broker/dealers. That unit, the Office of Compliance Inspections and Examinations (the “OCIE”), evolved and grew over time. It regularly issued Risk Alerts on specific topics aimed at broker/dealers and/or investment advisers, expecting that those addressees would take appropriate steps to prevent the occurrence of the identified risk, or at least mitigate its impact on customers. On Sept. 15, 2020, the OCIE issued a Risk Alert entitled “Cybersecurity: Safeguarding Client Accounts against Credential Compromise,” which emphasized the importance of compliance with SEC Regulation S-ID, the “Identity Theft Red Flags Rule,” adopted May 20, 2013, under sections of the Securities Exchange Act of 1934 (the “34 Act”) and the Investment Advisers Act of 1940, as amended (the “40 Act”). See, in that connection, the discussion of this and related SEC cyber regulations in my Nov. 19, 2020, blog “Credential Stuffing: Cyber Intrusions into Client Accounts of Broker/Dealers and Investment Advisors.”

The SEC was required to adopt Regulation S-ID by a provision in the 2010 Dodd-Frank Wall Street Reform and Consumer Protection Act, which amended a provision of the Fair Credit Reporting Act of 1970 (“FCRA”) to add both the SEC and the Commodity Futures Trading Commission to the federal agencies that must have “red flag” rules. That “red flag” requirement for the seven federal prudential bank regulators and the Federal Trade Commission was made part of the FCRA by a 2003 amendment. Until Wednesday, July 27, 2022, the SEC had (despite the Sept. 15, 2020, Risk Alert) brought only one enforcement action for violating the “Red Flag” Rule (in 2018 when customers of the firm involved suffered harm from the identity thefts). In 2017, however, the Commission created a new unit in its Division of Enforcement to better address the growing risks of cyber intrusion in the U.S. capital markets, the Crypto Assets and Cyber Unit (“CACU”). That unit almost doubled in size recently with the addition of 20 newly assigned persons, as reported in an SEC Press Release of May 3, 2022. There the Commission stated the Unit “will continue to tackle the omnipresent cyber-related threats in the nation’s [capital] markets.” Also, underscoring the ever-increasing role played by the SEC in overseeing the operations of broker/dealers and investment advisers, the OCIE was renamed the Division of Examinations (“Exams”) on Dec. 17, 2020, elevating an “Office” of the SEC to a “Division.”

Examinations of three broker/dealers by personnel from Exams led the CACU to investigate all three, resulting in the institution of Administrative and Cease-and-Desist Proceedings against each of the respondents for violations of Regulation S-ID. In those proceedings, the Commission alleged that the Identity Theft Prevention Program (“ITPP”), which each respondent was required to have, was deficient. Regulation S-ID, including its Appendix A, sets forth both the requirements for an ITPP and the types of red flags the Program should consider, and in Supplement A to Appendix A, includes examples of red flags from each category of possible risks. An ITPP must be in writing and should contain the following:

  1. Reasonable policies and procedures to identify, detect and respond appropriately to relevant red flags of the types likely to arise considering the firm’s business and the scope of its brokerage and/or advisory activities; and those policies and procedures should specify the responsive steps to be taken; broad generalizations will not suffice. Those policies and procedures should also describe the firm’s practices with respect to theft identification, prevention, and response, and direct that the firm document the steps to be taken in each case.
  2. Requirements for periodic updates of the Program, including updates reflecting the firm’s experience with both a) identity theft; and b) changes in the firm’s business. In addition, the updates should address changes in the types and mechanisms of cybersecurity risks the firm might plausibly encounter.
  3. Requirements for periodic review of the types of accounts offered and the risks associated with each type.
  4. Provisions directing at least annual reports to the firm’s board of directors, and/or senior management, addressing the program’s effectiveness, including identity theft-related incidents and management responses to them.
  5. Provisions for training of staff in identity theft and the responses required by the firm’s ITPP.
  6. Requirements for monitoring third party service providers for compliance with identity theft provisions that meet those of the firm’s program.

The ITPP of each of the three broker/dealers was, as noted, found deficient. The first, J.P. Morgan Securities, LLC (“Morgan”), organized under Delaware law and headquartered in New York, New York, is a wholly owned subsidiary of JPMorgan Chase & Co. (described by the Commission as “a global financial services firm” in its July 27, 2022, Order Instituting Administrative and Cease-and-Desist Proceedings [the “Morgan Order”]). Morgan is registered with the Commission as both a broker/dealer (since Dec. 13, 1985) and an investment adviser (since April 3, 1965). As recited in the Morgan Order, the SEC found Morgan offered and maintained customer accounts “primarily for personal, family, or household purposes that involve or are designed to permit multiple payments or transactions.” The order further notes that from Jan. 1, 2017, through Dec. 31, 2019, Morgan’s ITPP did not meet the requirements of Regulation S-ID because it “merely restated the general legal requirements” and did not specify how Morgan would identify a red flag or direct how to respond to it. The Morgan Order notes that although Morgan did take action to detect and respond to incidents of identity theft, the procedures followed were not in Morgan’s Program. Further, Morgan did not periodically update its program, even as both the types of accounts offered and the extent of cybersecurity risks changed. The SEC also found Morgan did not adequately monitor its third-party service providers, and it failed to provide any identity theft-specific training to its staff. As a result, Morgan had violated Regulation S-ID. The order noted that Morgan “has undertaken substantial remedial acts, including auditing and revising … [its Program].” Nonetheless, Morgan was ordered to cease and desist from violating Regulation S-ID, was censured, and was ordered to pay a civil penalty of $1.2 million.

The second broker/dealer charged was UBS Financial Services Inc. (“UBF”), a Delaware corporation dually registered with the Commission as both a broker/dealer and an investment adviser since 1971. UBF, headquartered in Weehawken, New Jersey, is a subsidiary of UBS Group AG, a publicly traded major financial institution incorporated in Switzerland. In 2008, UBF adopted an ITPP (the “UBF Program”) pursuant to the 2003 amendments to the FCRA. The program applied both to UBF and to other affiliated entities and branch offices in the U.S. and Puerto Rico “which offered private and retail banking, mortgage, and private investment services that operated under UBS Group AG’s Wealth Management Americas’ line of business.” See my blog published on Aug. 22, 2022, “Only Sell What You Know: Swiss Bank Negligence is a Fraud on Clients,” for information about the origins and history of UBS Group AG.

The July 27, 2022, SEC Order Instituting Administrative and Cease-and-Desist Proceedings against UBF (the “UBF Order”) stated that UBF made no change to the UBF Program when, in 2013, it became subject to Regulation S-ID, or thereafter from Jan. 1, 2017, to Dec. 31, 2019, other than to revise the list of entities and branches it covered. The Commission found UBF failed to update the UBF Program even as the accounts it offered changed, and without considering whether some accounts offered by affiliated entities and branches were not “covered accounts” within Regulation S-ID. The UBF Program did not have reasonable policies and procedures to identify red flags, taking into consideration account types and attendant risks, and did not specify what responses were required. The SEC also found the program wanting for not providing for periodic updates, especially addressing changes in accounts and/or in cybersecurity risks. The annual reports to the board of directors “did not provide sufficient information” to assess the UBF Program’s effectiveness or the adequacy of UBF’s monitoring of third-party service providers; indeed, the UBF Order notes the “board minutes do not reflect any discussion of compliance with Regulation S-ID.” In addition, UBF “did not conduct any training of its staff specific” to the UBF Program, including how to detect and respond to red flags. As a result, the Commission found UBF in violation of Regulation S-ID. Although the Commission again noted the “substantial remedial acts” undertaken by UBF, including retaining “an outside consulting firm to review its Program” and to recommend changes, the SEC nonetheless ordered UBF to cease and desist from violating the Regulation, censured UBF, and ordered it to pay a civil penalty of $925,000.

The third member of this broker/dealer trio is TradeStation Securities, Inc. (“TSS”), a Florida corporation headquartered in Plantation, Florida, that, according to the July 27, 2022, SEC Order Instituting Administrative and Cease-and-Desist Proceedings (the “TSS Order”), “provides primarily commission-free, directed online brokerage services to retail and institutional customers.” TSS has been registered with the SEC as a broker/dealer since January 1996. Its ITPP, too, was found deficient. The ITPP implemented by TSS (the “TSS Program”) essentially ignored the reality of TSS’s business as an online operation. For instance, the TSS Program cited only the red flags offered as “non-comprehensive examples in Supplement A to Appendix A” and not any “relevant to its business and the nature and scope of its brokerage activities.” Hence, the TSS Program cited the need to confirm the physical appearance of customers to make certain it was consistent with photographs or physical descriptions in the file. But an online broker/dealer would have scant opportunity to see a customer or a new customer in person, even when opening an account. Nor did TSS check the Supplement A red flag examples cited in the TSS Program when opening new customer accounts. The TSS Program directed only that “additional due diligence” should be performed if a red flag were identified, rather than directing specific responsive steps to be taken, such as not opening an account in a questionable situation. There were no requirements for periodic updates of the TSS Program.
Indeed, “there were no material changes to the Program” after May 20, 2013, “despite significant changes in external cybersecurity risks related to identity theft.” At this point in the TSS Order, the Commission cited a finding in the Federal Register that “[a]dvancements in technology … have led to increasing threats to the integrity … of personal information.” The SEC found that TSS did not provide reports about the TSS Program and compliance with Regulation S-ID either to the TSS board or to a designated member of senior management, and that TSS had no adequate policies and procedures in place to monitor third-party service providers for compliance with detecting and preventing identity theft. The order is silent on the extent of TSS’s training of staff to deal with identity threats, but considering the other shortcomings, presumably such training was at best haphazard. The Commission found that TSS violated Regulation S-ID. Although the TSS Order noted (as with the other Proceedings) the “substantial remedial acts” undertaken by TSS, including retaining “an outside consulting firm” to aid compliance, the Commission nonetheless ordered TSS to cease-and-desist from violating the Regulation, censured TSS, and ordered it to pay a civil penalty of $425,000.

These three enforcement actions on the same day, especially ones involving two of the world’s leading financial institutions, signal a new level of attention by the Commission to cybersecurity risks to customers of broker/dealers and investment advisers, with a focus on the risks inherent in identity theft. As one leading law firm writing about these three actions advised, “[f]irms should review their ITPPs placing particular emphasis on identifying red flags tailored to their business and on conducting regular compliance reviews to update those red flags and related policies and procedures to reflect changes in business practices and risk.” That sound advice should be followed NOW, before the CACU comes calling.

For more Financial, Securities, and Banking Law news, click here to visit the National Law Review.

©2022 Norris McLaughlin P.A., All Rights Reserved

Following the Recent Regulatory Trends, NLRB General Counsel Seeks to Limit Employers’ Use of Artificial Intelligence in the Workplace

On October 31, 2022, the General Counsel of the National Labor Relations Board (“NLRB” or “Board”) released Memorandum GC 23-02 urging the Board to interpret existing Board law to adopt a new legal framework to find electronic monitoring and automated or algorithmic management practices illegal if such monitoring or management practices interfere with protected activities under Section 7 of the National Labor Relations Act (“Act”).  The Board’s General Counsel stated in the Memorandum that “[c]lose, constant surveillance and management through electronic means threaten employees’ basic ability to exercise their rights,” and urged the Board to find that an employer violates the Act where the employer’s electronic monitoring and management practices, when viewed as a whole, would tend to “interfere with or prevent a reasonable employee from engaging in activity protected by the Act.”  Given that position, it appears that the General Counsel believes that nearly all electronic monitoring and automated or algorithmic management practices violate the Act.

Under the General Counsel’s proposed framework, an employer can avoid a violation of the Act if it can demonstrate that its business needs require the electronic monitoring and management practices and the practices “outweigh” employees’ Section 7 rights.  Not only must the employer be able to make this showing, it must also demonstrate that it provided the employees advance notice of the technology used, the reason for its use, and how it uses the information obtained.  An employer is relieved of this obligation, according to the General Counsel, only if it can show “special circumstances” justifying “covert use” of the technology.

In GC 23-02, the General Counsel signaled to NLRB Regions that they should scrutinize a broad range of “automated management” and “algorithmic management” technologies, defined as “a diverse set of technological tools and techniques to remotely manage workforces, relying on data collection and surveillance of workers to enable automated or semi-automated decision-making.”  Technologies subject to this scrutiny include those used during working time, such as wearable devices, security cameras, and radio-frequency identification badges that record workers’ conversations and track the movements of employees, GPS tracking devices and cameras that keep track of the productivity and location of employees who are out on the road, and computer software that takes screenshots, webcam photos, or audio recordings.  Also subject to scrutiny are technologies employers may use to track employees while they are off duty, such as employer-issued phones and wearable devices, and applications installed on employees’ personal devices.  Finally, the General Counsel noted that technologies an employer uses to hire employees, such as online cognitive assessments and reviews of social media, “pry into job applicants’ private lives.”  Thus, these pre-hire practices may also violate the Act.  Technologies such as resume readers and other automated selection tools used during hiring and promotion may also be subject to GC 23-02.

GC 23-02 follows the wave of recent federal guidance from the White House, the Equal Employment Opportunity Commission, and local laws that attempt to define, regulate, and monitor the use of artificial intelligence in decision-making capacities.  Like these regulations and guidance, GC 23-02 raises more questions than it answers.  For example, GC 23-02 does not identify the standards for determining whether business needs “outweigh” employees’ Section 7 rights, or what constitutes “special circumstances” that an employer must show to avoid scrutiny under the Act.

While GC 23-02 sets forth the General Counsel’s proposal and thus is not legally binding, it does signal that there will likely be disputes in the future over artificial intelligence in the employment context.

©2022 Epstein Becker & Green, P.C. All rights reserved.