Walgreens Settles for $106.8 Million Over FCA Violations

On September 13, the US Department of Justice (DOJ) announced that Walgreens Boots Alliance Inc. and Walgreen Co. (collectively, Walgreens) agreed to pay $106.8 million to resolve allegations of violating the False Claims Act (FCA) and state statutes. The allegations pertain to billing government health care programs for prescriptions that were never dispensed. The government alleged that from 2009 until 2020, Walgreens submitted claims to federal health care programs for prescriptions that were processed but never picked up by beneficiaries. This resulted in Walgreens receiving tens of millions of dollars for prescriptions that were never actually provided to health care beneficiaries.

Under the resolution, Walgreens self-reported certain conduct and agreed to enhance its electronic pharmacy management system to prevent future occurrences. In addition, Walgreens refunded $66,314,790 related to the settled claims, which allowed Walgreens to receive credit under the DOJ’s guidelines for taking disclosure, cooperation, and remediation into account in FCA cases.

Under the settlement agreement, the federal government received $91,881,530, and the individual states received $14,933,259 through separate settlement agreements. The settlement will resolve three cases pending in the District of New Mexico, Eastern District of Texas, and Middle District of Florida under the qui tam, or whistleblower, provision of the FCA. Whistleblowers Steven Turck and Andrew Bustos, former Walgreens employees, will receive $14,918,675 and $1,620,000, respectively, for their roles in filing the suits.

The DOJ’s press release can be found here.

CVS Health Subsidiary Settles FCA Allegations for $60 Million

On September 16, Oak Street Health, a Chicago-based subsidiary of CVS Health, agreed to pay $60 million to resolve allegations that it violated the FCA by paying kickbacks to third-party insurance agents in exchange for recruiting seniors to Oak Street Health’s primary care clinics from September 2020 through December 2022.

According to the DOJ, in 2020, Oak Street Health developed a program called the Client Awareness Program. Under the program, which was developed to increase patient membership, seniors who were eligible for Medicare Advantage received marketing messages designed to generate interest in Oak Street Health. Upon receipt of these messages, third-party insurance agents organized three-way phone calls with Oak Street Health employees for the interested seniors. Oak Street Health paid agents around $200 per beneficiary referred or recommended as part of this service. These payments allegedly encouraged agents to base referrals and recommendations on Oak Street Health’s financial interests rather than on the best interests of the seniors.

The DOJ’s press release can be found here.

Dunes Surgical Hospital Settles for $12.76 Million Over FCA Violations

On September 16, South Dakota companies Siouxland Surgery Center LLP, d.b.a. Dunes Surgical Hospital, United Surgical Partners International Inc. (USPI), and USP Siouxland Inc. agreed to pay approximately $12.76 million to settle FCA allegations related to improper financial relationships between Dunes and two physician groups. Since July 1, 2014, USPI has maintained partial ownership of Dunes through USP Siouxland, a wholly owned subsidiary of USPI. Following an internal investigation, Dunes and USPI disclosed the arrangements at issue to the government.

From at least 2014 through 2019, Dunes allegedly made financial contributions to a nonprofit affiliate of a physician group whose physicians referred patients to Dunes. According to the complaint, those payments funded the salaries of referring employees. Other allegations include that Dunes provided a different physician group with below-market-value clinic space, staff, and supplies. The DOJ alleged that these arrangements violated both the Anti-Kickback Statute and the Stark Law, which are “designed to ensure that decisions about patient care are based on physicians’ independent medical judgment and not their personal financial interest.”

Following Dunes’ and USPI’s internal compliance review and independent investigation, the companies promptly took remedial actions and disclosed the arrangements to the DOJ. The companies also provided the government with detailed and thorough written disclosures and cooperated throughout the government’s investigation, resulting in cooperation credit for the companies.

Under the settlement, Dunes and USPI will pay $12.76 million to the federal government for alleged violations of the FCA, and approximately $1.37 million to South Dakota, Iowa, and Nebraska for their share of the Medicaid portion of the settlement.

The DOJ’s press release can be found here.

California Man Convicted for Paying Illegal Kickbacks for Patient Referrals to Addiction Treatment Facilities

On September 11, a federal jury convicted Casey Mahoney, 48, of Los Angeles, for paying nearly $2.9 million in illegal kickbacks for patient referrals to his addiction treatment facilities in Orange County, California. The facilities involved are Healing Path Detox LLC and Get Real Recovery Inc.

According to court documents and evidence presented at trial, Mahoney paid illegal kickbacks to “body brokers” who referred patients to his facilities. These brokers allegedly paid thousands of dollars in cash to patients to induce them to seek treatment at Mahoney’s facilities. Mahoney allegedly concealed these illegal kickbacks through sham contracts with the body brokers. The contracts purportedly required fixed payments and prohibited payments based on the volume or value of patient referrals, when in reality, payments were negotiated based on patients’ insurance reimbursements and the number of days Mahoney could bill for treatment. Mahoney also allegedly laundered the proceeds of the conspiracy through payments to the mother of one of the body brokers, falsely characterizing them as consulting fees.

The Eliminating Kickbacks in Recovery Act formed the basis of the charges against Mahoney. He was convicted of one count of conspiracy to solicit, receive, pay, or offer illegal remunerations for patient referrals, seven counts of illegal remunerations for patient referrals, and three counts of money laundering. He is scheduled to be sentenced on January 17, 2025, and faces a maximum penalty of five years in prison for the conspiracy charge, 10 years in prison for each illegal remuneration count, and 20 years in prison for each money laundering count.

The DOJ’s press release can be found here.

© 2024 ArentFox Schiff LLP

by: D. Jacques Smith, Randall A. Brater, Michael F. Dearington, Nadia Patel, Hillary M. Stemple, and Rebekkah R.N. Stoeckler of ArentFox Schiff LLP


Consumer Privacy Update: What Organizations Need to Know About Impending State Privacy Laws Going into Effect in 2024 and 2025

Over the past several years, the number of states with comprehensive consumer data privacy laws has increased rapidly from just a handful—California, Colorado, Virginia, Connecticut, and Utah—to as many as twenty by some counts.

Many of these state laws will go into effect starting Q4 of 2024 through 2025. We have previously written in more detail on New Jersey’s comprehensive data privacy law, which goes into effect January 15, 2025, and Tennessee’s comprehensive data privacy law, which goes into effect July 1, 2025. Some laws have already gone into effect, like Texas’s Data Privacy and Security Act and Oregon’s Consumer Privacy Act, both of which became effective in July 2024. Now is a good time to take stock of the current landscape as the next batch of state privacy laws goes into effect.

Over the next year, the following laws will become effective:

  1. Montana Consumer Data Privacy Act (effective Oct. 1, 2024)
  2. Delaware Personal Data Privacy Act (effective Jan. 1, 2025)
  3. Iowa Consumer Data Protection Act (effective Jan. 1, 2025)
  4. Nebraska Data Privacy Act (effective Jan. 1, 2025)
  5. New Hampshire Privacy Act (effective Jan. 1, 2025)
  6. New Jersey Data Privacy Act (effective Jan. 15, 2025)
  7. Tennessee Information Protection Act (effective July 1, 2025)
  8. Minnesota Consumer Data Privacy Act (effective July 31, 2025)
  9. Maryland Online Data Privacy Act (effective Oct. 1, 2025)

These nine state privacy laws contain many similarities, broadly conforming to the Virginia Consumer Data Protection Act we discussed here.  All nine laws listed above contain the following familiar requirements:

(1) disclosing data handling practices to consumers;

(2) including certain contractual terms in data processing agreements;

(3) performing risk assessments (with the exception of Iowa); and

(4) affording resident consumers certain rights, such as the right to access or know the personal data processed by a business, the right to correct any inaccurate personal data, the right to request deletion of personal data, the right to opt out of targeted advertising or the sale of personal data, and the right to opt out of the processing of sensitive information.

The laws contain more than a few noteworthy differences. Each of the laws differs in the scope of its application. The applicability thresholds vary based on: (1) the number of state residents whose personal data the company (or “controller”) controls or processes, or (2) the proportion of revenue a controller derives from the sale of personal data. Maryland, Delaware, and New Hampshire each have a 35,000-consumer processing threshold. Nebraska’s law, similar to the recently passed data privacy law in Texas, applies to controllers that do not qualify as small businesses and that process personal data or engage in personal data sales. It is also important to note that Iowa adopted a comparatively narrower definition of what constitutes a sale of personal data, limiting it to transactions involving monetary consideration. All states require that the company conduct business in the state.

With respect to the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”), Iowa’s, Montana’s, Nebraska’s, New Hampshire’s, and Tennessee’s laws exempt HIPAA-regulated entities altogether, while Delaware’s, Maryland’s, Minnesota’s, and New Jersey’s laws exempt only protected health information (“PHI”) under HIPAA. As a result, HIPAA-regulated entities will have the added burden of assessing whether data is covered by HIPAA or an applicable state privacy law.

With respect to the Gramm-Leach-Bliley Act (“GLBA”), eight of these nine comprehensive privacy laws contain an entity-level exemption for GLBA-covered financial institutions. By contrast, Minnesota’s law exempts only data regulated by GLBA. Minnesota joins California and Oregon as the only three states whose consumer privacy laws contain information-level GLBA exemptions.

Not least of all, Maryland’s law stands apart from the other data privacy laws due to a number of unique obligations, including:

  • A prohibition on the collection, processing, and sharing of a consumer’s sensitive data except when doing so is “strictly necessary to provide or maintain a specific product or service requested by the consumer.”
  • A broad prohibition on the sale of sensitive data for monetary or other valuable consideration unless such sale is necessary to provide or maintain a specific product or service requested by a consumer.
  • Special provisions applicable to “Consumer Health Data” processed by entities not regulated by HIPAA. Note that “Consumer Health Data” laws also exist in Nevada, Washington, and Connecticut as we previously discussed here.
  • A prohibition on selling or processing minors’ data for targeted advertising if the controller knows or should have known that the consumer is under 18 years of age.

While states continue to enact comprehensive data privacy laws, there remains the possibility of a federal privacy law to bring in a national standard. The American Privacy Rights Act (“APRA”) recently went through several iterations in the House Committee on Energy and Commerce this year, and it reflects many of the elements of these state laws, including transparency requirements and consumer rights. A key sticking point, however, continues to be the broad private right of action included in the proposed APRA but largely absent from state privacy laws. Only California’s law, which we discussed here, has a private right of action, and it is narrowly circumscribed to data breaches. Considering the November 2024 election cycle, it is likely that federal efforts to create a comprehensive privacy law will stall until the election cycle is over and the composition of the White House and Congress is known.

EVERYTHING’S FINE: Big TCPA Win For Medical Debt Collector Suggests FCC Rulings Still Binding After Loper Bright–Let’s Hope it Stays That Way

Fascinating little case for you all today.

Consumer visits hospital for treatment. Provides phone number at admission. Receives treatment and is discharged.

Consumer fails to pay resulting invoices. Hospital and provider network turn account over to collections. Debt collector allegedly uses an ATDS to call consumer on the number she provided.

What result?

Prior to the Supreme Court’s Loper Bright decision, the determination would have been easy. The FCC held back in 2009 that providing a number in connection with a transaction permits autodialed calls to a consumer in connection with that transaction. And the Sixth Circuit Court of Appeals has directly held that providing a phone number on hospital intake documents permits later debt collection activity at that number–including via autodialer.

But the Loper Bright decision recently destroyed Chevron deference–meaning courts no longer have to yield to agency determinations of this sort. And while the Hobbs Act affords extra protections to certain FCC rulings, those protections only apply where certain procedural requirements were met by the Commission in adopting the rule.

So does the FCC’s rule from 2009 permitting informational calls to numbers provided in connection with a transaction still bind courts? According to the decision in Woodman v. Medicredit, 2024 WL 4132732 (D. Nev. Sept. 9, 2024), the answer is yes!

In Woodman the defendant debt collector moved for summary judgment arguing the Plaintiff consented when she provided her number to the hospital. The Court had little problem applying the FCC’s 2009 order and precedent that came before Loper Bright to grant summary judgment to the defense. So just like that, the case is gone.

Great ruling for the defense, of course, and it makes me feel a bit better about the whole “no one knows what the law is anymore” thing, but the Woodman court didn’t really address the core issue: was the 2009 ruling enacted with sufficient APA pomp and circumstance to merit Hobbs Act deference under PDR Resources?

Really interesting question and one folks should keep in mind.

FTC Announces Final Rule Imposing Civil Penalties for Fake Consumer Reviews and Testimonials

On August 14, 2024, the Federal Trade Commission announced a Final Rule combatting bogus consumer reviews and testimonials by prohibiting their sale or purchase. The Rule allows the FTC to strengthen enforcement, seek civil penalties against violators, and deter AI-generated fake reviews.

“Fake reviews not only waste people’s time and money, but also pollute the marketplace and divert business away from honest competitors,” said FTC Chair Lina M. Khan. “By strengthening the FTC’s toolkit to fight deceptive advertising, the final rule will protect Americans from getting cheated, put businesses that unlawfully game the system on notice, and promote markets that are fair, honest, and competitive.”

The Rule, announced on August 14, 2024, follows an advance notice of proposed rulemaking and a notice of proposed rulemaking announced in November 2022 and June 2023, respectively. The FTC also held an informal hearing on the proposed rule in February 2024. In response to public comments, the Commission made numerous clarifications and adjustments to its previous proposal.

What Does the FTC Final Rule on the Use of Consumer Reviews and Testimonials Prohibit?

The FTC Final Rule on the Use of Consumer Reviews and Testimonials prohibits:

Writing, selling, or buying fake or false consumer reviews. 

The Rule prohibits businesses from writing or selling consumer reviews that misrepresent that they are by someone who does not exist or who did not have actual experience with the business or its products or services, or that misrepresent the reviewers’ experience. It also prohibits businesses from buying consumer reviews that they knew or should have known made such a misrepresentation. Businesses are also prohibited from procuring from certain company insiders such reviews about the business or its products or services for posting on third-party sites, when the businesses knew or should have known about the misrepresentation. The prohibitions on buying or procuring reviews do not cover generalized review solicitations to past customers or simply hosting reviews on the business’s website. Neither will a retailer or other entity be liable for sharing consumer reviews unless it would have been liable for displaying those same reviews on its own website.

Writing, selling, or disseminating fake or false testimonials. 

Businesses are similarly prohibited from writing or selling consumer or celebrity testimonials that make the same kinds of misrepresentations. They are also prohibited from disseminating or causing the dissemination of such testimonials when they knew or should have known about the misrepresentation. The prohibition on disseminating testimonials does not cover the type of generalized solicitations to past customers discussed above with respect to reviews.

Buying positive or negative reviews.

Businesses are prohibited from providing compensation or other incentives contingent on the writing of consumer reviews expressing a particular sentiment, either positive or negative. Violations here include situations in which such a contingency is express or implied. So, for example, while it prohibits offering $25 for a 5-star review, it also prohibits offering $25 for a review “telling everyone how much you love our product.”

Failing to make disclosures about insider reviews and testimonials.

The Rule prohibits a company’s officers and managers from writing reviews or testimonials about the business or its products or services without clearly disclosing their relationship. Businesses are also prohibited from disseminating testimonials by company insiders without clear disclosures, if the businesses knew or should have known of the relationship. A similar prohibition exists for officer or manager solicitations of reviews from their immediate relatives or from employees or agents of the business, and when officers or managers ask employees or agents to seek such reviews from relatives. For these various solicitations, the Rule is violated only if: (i) the officers or managers did not give instructions about making clear disclosures; (ii) the resulting reviews – either by the employees, agents, or the immediate relatives of the officers, managers, employees, or agents – appear without clear disclosures; and (iii) the officers or managers knew or should have known that such reviews appeared and failed to take steps to have those reviews either removed or amended to include clear disclosures. All of these prohibitions hinge on the undisclosed relationship being material to consumers. These disclosure provisions also clarify that they do not cover mere review hosting or generalized solicitations to past customers.

Deceptively claiming that company-controlled review websites are independent.

Businesses are prohibited from misrepresenting that websites or entities they control or operate are providing independent reviews or opinions, other than consumer reviews, about a category of businesses, products, or services that includes their own business, product, or service.

Illegally suppressing negative reviews.

The Rule prohibits using unfounded or groundless legal threats, physical threats, intimidation or public false accusations (when the accusation is made with knowledge that it is false or with reckless disregard as to its truth or falsity) to prevent the posting or cause the removal of all or part of a consumer review. Legal threats are “unfounded or groundless” if they are unwarranted by existing law or based on allegations that have no evidentiary support, according to the FTC. Also, if reviews on a marketer’s website have been suppressed based on their rating or negative sentiment, the Rule prohibits that business from misrepresenting that the reviews on a portion of its website dedicated to receiving and displaying such reviews represent most or all submitted reviews.

Selling and buying fake social media indicators.

The Rule prohibits the sale or distribution of fake indicators of social media influence, like fake followers or views. A “fake” indicator means one generated by a bot, a hijacked account, or that otherwise does not reflect a real individual’s or entity’s activities or opinions, according to the FTC. The Rule also bars anyone from buying or procuring such fake indicators. These prohibitions are limited to situations in which the violator knew or should have known that the indicators were fake and which involved misrepresentations of a person’s or company’s influence or importance for a commercial purpose.

The Rule does not specifically refer to AI. However, according to the FTC, these prohibitions cover situations when someone uses an AI tool to generate the deceptive content at issue.

According to the FTC, case-by-case enforcement without civil penalty authority might not be enough to deter clearly deceptive review and testimonial practices. The Supreme Court’s decision in AMG Capital Management LLC v. FTC has hindered the FTC’s ability to seek monetary relief for consumers under the FTC Act. The Rule is intended to enhance deterrence and strengthen FTC enforcement actions.

The Rule will become effective 60 days after the date it’s published in the Federal Register.

Takeaway: The FTC will aggressively enforce the new Rule. The agency has challenged illegal practices regarding bogus reviews and testimonials for quite some time. In addition to investigations and enforcement actions, the FTC has also issued guidance to help businesses comply. According to the agency, online marketplaces and social media companies could and should do more when it comes to policing their platforms.

FCC’s New Notice of Inquiry – Is This Big Brother’s Origin Story?

The FCC’s recent Notice of Proposed Rulemaking and Notice of Inquiry was released on August 8, 2024. While the proposed Rule is, deservedly, getting the most press, it’s important to pay attention to the Notice of Inquiry.

The part which is concerning to me is the FCC’s interest in “development and availability of technologies on either the device or network level that can: 1) detect incoming calls that are potentially fraudulent and/or AI-generated based on real-time analysis of voice call content; 2) alert consumers to the potential that such voice calls are fraudulent and/or AI-generated; and 3) potentially block future voice calls that can be identified as similar AI-generated or otherwise fraudulent voice calls based on analytics.” (emphasis mine)

The FCC also wants to know “what steps can the Commission take to encourage the development and deployment of these technologies…”

The FCC does note there are “significant privacy risks, insofar as they appear to rely on analysis and processing of the content of calls.” The FCC also wants comments on “what protections exist for non-malicious callers who have a legitimate privacy interest in not having the contents of their calls collected and processed by unknown third parties?”

So, the Federal Communications Commission wants to monitor the CONTENT of voice calls. In real-time. On your device.

That’s not a problem for anyone else?

Sure, robocalls are bad. There are scams on robocalls.

But, are robocalls so bad that we need real-time monitoring of voice call content?

At what point did we throw the Fourth Amendment out the window, and to prevent what? Phone calls??

The basic premise of the Fourth Amendment is “to safeguard the privacy and security of individuals against arbitrary invasions by governmental officials.” I’m not sure how we get more arbitrary than “this incoming call is a fraud” versus “this incoming call is not a fraud”.

So, maybe you consent to this real-time monitoring. Sure, ok. But, can you actually give informed consent to what would happen with this monitoring?

Let me give you three examples of “pre-recorded calls” that the real-time monitoring could overhear to determine if the “voice calls are fraudulent and/or AI-generated”:

  1. Your phone rings. It’s a prerecorded call from Planned Parenthood confirming your appointment for tomorrow.
  2. Your phone rings. It’s an artificial voice recording from your lawyer’s office telling you that your criminal trial is tomorrow.
  3. Your phone rings. It’s the local jewelry store saying your ring is repaired and ready to be picked up.

Those are basic examples, but for someone to “detect incoming calls that are potentially fraudulent and/or AI-generated based on real-time analysis of voice call content,” those calls have to be monitored in real time. And stored somewhere. Maybe on your device. Maybe by a third party in their cloud.

Maybe you trust Apple with that info. But, do you trust someone who comes up with fraudulent monitoring software that would harvest that data? How do you know you should trust that party?

Or you trust Google. Surely, Google wouldn’t use your personal data. Surely, they would not use your phone call history to sell ads.

And that becomes data a third-party can use. For ads. For political messaging. For profiling.

Yes, this is extremely conspiratorial. But, that doesn’t mean your data is not valuable. And where there is valuable data, there are people willing to exploit it.

Robocalls are a problem. And there are some legitimate businesses doing great things with fraud detection monitoring. But, a real-time monitoring edict from the government is not the solution. As an industry, we can be smarter on how we handle this.

FDA Releases Summary Report on Fresh Herbs Sampling Assignment

  • On July 26, 2024, the U.S. Food and Drug Administration (FDA) released findings from its sampling assignment that collected and tested domestic and imported basil, cilantro, and parsley. FDA sought to estimate the prevalence of Cyclospora, Salmonella, and Shiga toxin-producing Escherichia coli (STEC) in these herbs as part of its ongoing effort to ensure food safety and prevent contamination.
  • From September 2017 to September 2021, FDA collected and tested 1,383 samples of fresh basil, cilantro, and parsley. The Agency detected Salmonella in 17 samples, detected Cyclospora in 18 samples, and detected STEC in 1 sample. The contaminated products were quickly removed from the market.
  • The sampling assignment was conducted in response to foodborne illness outbreaks of Cyclospora, Salmonella, and STEC. From 2000 through 2016, cilantro was potentially linked to at least three outbreaks in the US. And since 2017, the US has experienced at least six additional outbreaks involving basil, cilantro, and parsley. More than 1,200 illnesses and 80 hospitalizations were tied to these outbreaks.

U.S. Sues TikTok for Children’s Online Privacy Protection Act (COPPA) Violations

On Friday, August 2, 2024, the United States sued ByteDance, TikTok, and its affiliates for violating the Children’s Online Privacy Protection Act of 1998 (“COPPA”) and the Children’s Online Privacy Protection Rule (“COPPA Rule”). In its complaint, the Department of Justice alleges TikTok collected, stored, and processed vast amounts of data from millions of child users of its popular social media app.

In June, the FTC voted to refer the matter to the DOJ, stating that it had determined there was reason to believe TikTok (f.k.a. Musical.ly, Inc.) had violated a 2019 FTC consent order and that the agency had also uncovered additional potential COPPA and FTC Act violations. The lawsuit, filed in the Central District of California, alleges that TikTok is directed to children under age 13, that TikTok has permitted children to evade its age gate, that TikTok has collected data from children without first notifying their parents and obtaining verifiable parental consent, that TikTok has failed to honor parents’ requests to delete their children’s accounts and information, and that TikTok has failed to delete the accounts and information of users the company knows are children. The complaint also alleges that TikTok failed to comply with COPPA even for accounts in the platform’s “Kids Mode” and that TikTok improperly amassed profiles on Kids Mode users. The complaint seeks civil penalties of up to $51,744 per violation per day from January 10, 2024, to present for the improper collection of children’s data, as well as permanent injunctive relief to prevent future violations of the COPPA Rule.

The lawsuit comes on the heels of the U.S. Senate passage this week of the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) and the Kids Online Safety Act (KOSA) by a 91-3 bipartisan vote. It is unknown whether the House will take up the bills when it returns from recess in September.

“Arbitrary and Capricious” – A Sign of Things to Come?

On July 3, 2024, the US District Court for the Northern District of Texas issued a Memorandum Opinion and Order in the combined cases of Americans for Beneficiary Choice, et al. v. United States Department of Health and Human Services (Civ. Action No. 4:24-cv-00439) and Council for Medicare Choice, et al. v. United States Department of Health and Human Services (Civ. Action No. 4:24-cv-00446).

The Plaintiffs in the combined case challenged a Centers for Medicare and Medicaid Services (“CMS”) rule issued earlier this year. The new rule places reimbursements to third-party firms within the definition of compensation, which is subject to the regulatory cap on compensation; the prior rules did not treat reimbursements as compensation.

The Memorandum Opinion and Order granted the Plaintiffs’ Motion for a Stay in part and denied it in part. The Motion was granted in relation to the new CMS rules around compensation paid by Medicare Advantage and Part D plans to independent agents and brokers who help beneficiaries select and enroll in private plans.

The Court found that the compensation changes were arbitrary and capricious and that the Plaintiffs were substantially likely to succeed on the merits of the case. The Court found that CMS failed to substantiate key parts of the final rule. During the rulemaking process, industry commenters asked for clarification around parts of the rule, but CMS claimed “the sources Plaintiffs criticized were not significant enough to warrant defending them.” The Court found “because CMS failed to address important problems to their central evidence…that members of the public raised during the comment period, those aspects of the Final Rule are most likely arbitrary and capricious.”

One of the Plaintiffs, Americans for Beneficiary Choice, also challenged the consent requirement of the final rule. The final rule states that personal beneficiary data collected by a third-party marketing organization (“TPMO”) can only be shared with another TPMO if the beneficiary gives prior express written consent. The Plaintiff argued that the consent requirement is “in tension with HIPAA’s broader purpose of facilitating data sharing,” and CMS responded that while HIPAA might facilitate data sharing, that does not limit CMS’s ability to restrict certain harmful data-sharing practices. The Court denied the Motion to Stay regarding the consent requirement, but interestingly stated that while Plaintiff’s “claim regarding the Consent Requirement may ultimately have merit, [Plaintiff]’s current briefing does not demonstrate a substantial likelihood of success at this stage.”

What does this mean now that we are less than 90 days from the start of the 2025 Medicare Advantage/Part D contract year?

  1. The consent requirement is still moving forward – While the memorandum order hints at the possibility of it being rejected, as of right now, TPMOs must obtain prior express written consent before sharing personal beneficiary data with another TPMO.
  2. The fixed-fee and contract-terms restrictions in the final rule have had their effective dates stayed until this suit is resolved. Therefore, the compensation scheme that was in place last year remains essentially unchanged with respect to those two provisions.

How does this affect the FCC’s 1:1 Ruling?

It doesn’t. While this case does show that courts are willing to look critically at agencies’ rulemaking processes, the FCC’s 1:1 consent requirement is different from the compensation changes set forth by CMS.

The FCC arguably just clarified the existing rule around prior express written consent by requiring the consent to “authorize no more than one identified seller”.

CMS, on the other hand, attempted to make wholesale changes and “began to set fixed rates for a wide range of administrative payments that were previously uncapped and unregulated as compensation.”

There is still the IMC case against the FCC, so there is the possibility (albeit small) that relief could come in that case. However, the advice here is to continue planning for obtaining consent to share personal beneficiary data AND single-seller consent.

The Privacy Patchwork: Beyond US State “Comprehensive” Laws

We’ve cautioned before about the danger of thinking only about US state “comprehensive” laws when assessing privacy and data security obligations in the United States. We’ve also mentioned that the US has a patchwork of privacy laws. That patchwork exists to a certain extent outside of the US as well. What laws exist in the patchwork that relate to a company’s activities?

There are laws that apply when companies host websites, including the most well-known, the California Online Privacy Protection Act (CalOPPA). It has been in effect since July 2004, thus predating the CCPA by 14 years. Then there are laws that apply if a company is collecting and using biometric identifiers, like Illinois’ Biometric Information Privacy Act.

Companies are subject to specific laws both in the US and elsewhere when engaging in digital communications. These laws include the US federal laws TCPA and TCFAPA, as well as CAN-SPAM. Digital communication laws exist in countries as wide-ranging as Australia, Canada, and Morocco, among many others. Then we have laws that apply when collecting information during a credit card transaction, like California’s Song-Beverly Credit Card Act.

Putting It Into Practice: When assessing your company’s obligations under privacy and data security laws, keep activity-specific privacy laws in mind. Depending on what you are doing, and in what jurisdictions, you may have more obligations to address than simply those found in comprehensive privacy laws.

American Privacy Rights Act Advances with Significant Revisions

On May 23, 2024, the U.S. House Committee on Energy and Commerce Subcommittee on Data, Innovation, and Commerce approved a revised draft of the American Privacy Rights Act (“APRA”), which was released just 36 hours before the markup session. With the subcommittee’s approval, the APRA will now advance to full committee consideration. The revised draft includes several notable changes from the initial discussion draft, including:

  • New Section on COPPA 2.0 – the revised APRA draft includes the Children’s Online Privacy Protection Act (COPPA 2.0) under Title II, which differs to a certain degree from the COPPA 2.0 proposal currently before the Senate (e.g., removal of the revised “actual knowledge” standard; removal of applicability to teens over age 12 and under age 17).
  • New Section on Privacy By Design – the revised APRA draft includes a new dedicated section on privacy by design. This section requires covered entities, service providers and third parties to establish, implement, and maintain reasonable policies, practices and procedures that identify, assess and mitigate privacy risks related to their products and services during the design, development and implementation stages, including risks to covered minors.
  • Expansion of Public Research Permitted Purpose – as an exception to the general data minimization obligation, the revised APRA draft adds another permissible purpose for processing data for public or peer-reviewed scientific, historical, or statistical research projects. These research projects must be in the public interest and comply with all relevant laws and regulations. If the research involves transferring sensitive covered data, the revised APRA draft requires the affirmative express consent of the affected individuals.
  • Expanded Obligations for Data Brokers – the revised APRA draft expands obligations for data brokers by requiring them to include a mechanism for individuals to submit a “Delete My Data” request. This mechanism, similar to the California Delete Act, requires data brokers to delete all covered data related to an individual that they did not collect directly from that individual, if the individual so requests.
  • Changes to Algorithmic Impact Assessments – while the initial APRA draft required large data holders to conduct and report a covered algorithmic impact assessment to the FTC if they used a covered algorithm posing a consequential risk of harm to individuals, the revised APRA requires such impact assessments for covered algorithms to make a “consequential decision.” The revised draft also allows large data holders to use certified independent auditors to conduct the impact assessments, directs the reporting mechanism to NIST instead of the FTC, and expands requirements related to algorithm design evaluations.
  • Consequential Decision Opt-Out – while the initial APRA draft allowed individuals to invoke an opt-out right against covered entities’ use of a covered algorithm making or facilitating a consequential decision, the revised draft now also allows individuals to request that consequential decisions be made by a human.
  • New and/or Revised Definitions – the revised APRA draft’s definition section includes new terms, such as “contextual advertising” and “first party advertising.” The revised APRA draft also redefines certain terms, including “covered algorithm,” “sensitive covered data,” “small business” and “targeted advertising.”