FTC Surveillance Pricing Study Uncovers Personal Data Used to Set Individualized Consumer Prices

The Federal Trade Commission’s initial findings from its surveillance pricing market study revealed that details such as a person’s precise location or browser history can frequently be used to target individual consumers with different prices for the same goods and services.

The staff perspective is based on an examination of documents obtained through the 6(b) orders FTC staff sent to several companies in July, aiming to better understand the “shadowy market that third-party intermediaries use to set individualized prices for products and services based on consumers’ characteristics and behaviors, like location, demographics, browsing patterns and shopping history.”

Staff found that consumer behaviors ranging from mouse movements on a webpage to the type of products that consumers leave unpurchased in an online shopping cart can be tracked and used by retailers to tailor consumer pricing.

“Initial staff findings show that retailers frequently use people’s personal information to set targeted, tailored prices for goods and services—from a person’s location and demographics, down to their mouse movements on a webpage,” said FTC Chair Lina M. Khan. “The FTC should continue to investigate surveillance pricing practices because Americans deserve to know how their private data is being used to set the prices they pay and whether firms are charging different people different prices for the same good or service.”

The FTC’s study of the 6(b) documents is still ongoing. The staff perspective is based on an initial analysis of documents provided by Mastercard, Accenture, PROS, Bloomreach, Revionics and McKinsey & Co.

The FTC’s 6(b) study focuses on intermediary firms, the middlemen hired by retailers to algorithmically tweak and target their prices. Instead of a price or promotion being a static feature of a product, the same product could have a different price or promotion based on a variety of inputs—including consumer-related data on behaviors and preferences, and the location, time, and channel by which a consumer buys the product, according to the perspective.
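
To make those mechanics concrete, below is a hypothetical sketch, in Python, of the kind of rule-based pricing engine the staff perspective describes. Every field name and adjustment rule is invented for illustration; none of it comes from the 6(b) documents.

    # Hypothetical illustration only: a toy pricing engine in which price is
    # no longer a static attribute of the product but a function of
    # consumer-related signals. All fields and rules here are invented.
    from dataclasses import dataclass

    @dataclass
    class ConsumerProfile:
        location: str                 # e.g., ZIP code or region
        is_new_parent: bool           # inferred from browsing/purchase history
        abandoned_cart_value: float   # value of items left unpurchased
        channel: str                  # "mobile_app", "web", or "in_store"

    def quote_price(base_price: float, profile: ConsumerProfile) -> float:
        """Return an individualized price derived from a static base price."""
        price = base_price
        # Profile-based markup: the staff perspective's hypothetical of a new
        # parent being shown higher-priced baby products maps to a rule like this.
        if profile.is_new_parent:
            price *= 1.10
        # Behavioral signal: a high-value abandoned cart may trigger a discount
        # aimed at recovering the sale.
        if profile.abandoned_cart_value > 100:
            price *= 0.95
        # Channel-based adjustment.
        if profile.channel == "mobile_app":
            price *= 1.02
        return round(price, 2)

    profile = ConsumerProfile("78701", is_new_parent=True,
                              abandoned_cart_value=0.0, channel="web")
    print(quote_price(29.99, profile))  # 32.99: same product, different shopper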

The agency releases information obtained from a 6(b) study only after all data has been aggregated or anonymized to protect company respondents’ confidential trade secrets, and the staff perspective therefore includes only hypothetical examples of surveillance pricing.

The staff perspective found that some 6(b) respondents can determine individualized and different pricing and discounts based on granular consumer data, like a cosmetics company targeting promotions to specific skin types and skin tones. The perspective also found that the intermediaries the FTC examined can show higher-priced products based on consumers’ search and purchase activity.

As one hypothetical outlined, a consumer who is profiled as a new parent may intentionally be shown higher-priced baby thermometers on the first page of their search results.

The FTC staff found that the intermediaries worked with at least 250 clients, ranging from grocery stores to apparel retailers, that sell goods or services. The FTC found that widespread adoption of this practice may fundamentally upend how consumers buy products and how companies compete.

As the FTC continues its work in this area, it issued a request for information seeking public comment on consumers’ experiences with surveillance pricing. The RFI also asked for comments from businesses about whether surveillance pricing tools can lead to competitors gaining an unfair advantage, and whether gig workers or employees have been impacted by the use of surveillance pricing to determine their compensation.

The Commission voted 3-2 to allow staff to issue the report. Commissioners Andrew Ferguson and Melissa Holyoak issued a dissenting statement related to the release of the initial research summaries.

The FTC has additional resources on the interim findings, including a blog post advocating for further engagement with this issue, an issue spotlight with more background and research on surveillance pricing, and research summaries based on the staff review and initial insights of the 6(b) study documents.

Breaking News: U.S. Supreme Court Upholds TikTok Ban Law

On January 17, 2025, the Supreme Court of the United States (“SCOTUS”) unanimously upheld the Protecting Americans from Foreign Adversary Controlled Applications Act (the “Act”), which restricts companies from making foreign adversary controlled applications available (i.e., on an app store) and from providing hosting services with respect to such apps. The Act does not apply to covered applications for which a qualified divestiture is executed.

The result of this ruling is that TikTok, an app owned by Chinese company ByteDance that qualifies as a foreign adversary controlled application under the Act, will face a ban when the law enters into effect on January 19, 2025. To continue operating in the United States in compliance with the Act, ByteDance must sell the U.S. arm of the company such that it is no longer controlled by a company in a foreign adversary country. In the absence of a divestiture, U.S. companies that make the app available or provide hosting services for the app will face enforcement under the Act.

It remains to be seen how the Act will be enforced in light of the upcoming changes to the U.S. administration. TikTok has 170 million users in the United States.

FCC Adopts Report and Order Introducing New Fees Associated with the Robocall Mitigation Database

As I am sure you all know, the Robocall Mitigation Database (RMD) was implemented to further the FCC’s efforts to protect America’s networks from illegal robocalls and grew out of the TRACED Act. The RMD was put in place to monitor the traffic on our phone networks and to assist in compliance with the rules. While the FCC has expanded the types of service providers who need to file and the filing requirements, the Commission still felt there were deficiencies in the accuracy and currency of the information. The newly adopted Report and Order is set to help fine-tune the RMD.

On December 30th, the Commission adopted a Report and Order to further strengthen its efforts and the fines and fees associated with the RMD. Companies that submit false or inaccurate information may face fines of up to $10,000 for each filing, while failing to keep your company information current might land you a $1,000 fine. There will now be a $100 filing fee for your RMD application, along with a $100 annual recertification filing fee.

Aside from the fines and fees, there are a few additional developments with the RMD; see the complete list below.

  • Requiring prompt updates when a change to a provider’s information occurs (updates must be made within 10 business days or face a $1,000 fine);
  • Establishing a higher base forfeiture amount ($10,000) for providers submitting false or inaccurate information;
  • Creating a dedicated reporting portal for deficient filings;
  • Issuing substantive guidance and filer education;
  • Developing a two-factor authentication log-in solution (see the sketch after this list);
  • Requiring providers to recertify their Robocall Mitigation Database filings annually ($100 fee); and
  • Requiring providers to remit a filing fee for initial and subsequent annual submissions ($100).
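
Since the Report and Order does not specify which two-factor mechanism the RMD will adopt, here is a minimal sketch, using only the Python standard library, of one common second factor: a time-based one-time password (TOTP, RFC 6238). It simply shows what such a log-in check typically looks like; it is not the FCC’s design.

    # Illustrative TOTP (RFC 6238) sketch using only the standard library.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32)
        counter = int(time.time()) // interval            # current time step
        digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def verify(secret_b32: str, submitted_code: str) -> bool:
        # Constant-time comparison of the expected and submitted codes.
        return hmac.compare_digest(totp(secret_b32), submitted_code)

    secret = base64.b32encode(b"shared-filer-secret!").decode()
    print(verify(secret, totp(secret)))  # True when codes match in the window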

Chairwoman Rosenworcel is quoted as saying: “Companies using America’s phone networks must be actively involved in protecting consumers from scammers, we are tightening our rules to ensure voice service providers know their responsibilities and help stop junk robocalls. I thank my colleagues for their bipartisan support of this effort.”

The new fines and fees will become effective 30 days after publication in the Federal Register, while the remaining items are still under additional review. We will keep an eye on this and let you know once the Report and Order is published. Read the Report and Order here.

OCR Proposes Tighter Security Rules for HIPAA Regulated Entities, including Business Associates and Group Health Plans

As the healthcare sector continues to be a top target for cyber criminals, the Office for Civil Rights (OCR) issued proposed updates to the HIPAA Security Rule (scheduled to be published in the Federal Register January 6). It looks like substantial changes are in store for covered entities, such as healthcare providers and health plans, and their business associates alike.

According to the OCR, cyberattacks against the U.S. health care and public health sectors continue to grow and threaten the provision of health care, the payment for health care, and the privacy of patients and others. The OCR reported that in 2023 over 167 million people were affected by large breaches of health information, a 1,002% increase from 2018. Further, seventy-nine percent of the large breaches reported to the OCR in 2023 were caused by hacking. Since 2019, large breaches caused by successful hacking and ransomware attacks have increased 89% and 102%, respectively.

The proposed Security Rule changes are numerous and include some of the following items:

  • All Security Rule policies, procedures, plans, and analyses would need to be in writing.
  • Create and maintain a technology asset inventory and a network map illustrating the movement of ePHI throughout the regulated entity’s information systems on an ongoing basis, but at least once every 12 months.
  • More specificity would be required for risk analysis. For example, risk assessments must be in writing and include action items such as identification of all reasonably anticipated threats to ePHI confidentiality, integrity, and availability, and potential vulnerabilities to information systems.
  • Provide 24-hour notice to regulated entities when a workforce member’s access to ePHI or certain information systems is changed or terminated.
  • Implement stronger incident response procedures, including: (i) written procedures to restore the loss of certain relevant information systems and data within 72 hours, and (ii) written security incident response plans and procedures, including testing and revising those plans.
  • Conduct a compliance audit at least once every 12 months.
  • Business associates would need to verify Security Rule compliance to covered entities through analysis by a subject matter expert at least once every 12 months.
  • Encrypt ePHI at rest and in transit, with limited exceptions (a minimal sketch of encryption at rest follows this list).
  • New express requirements would include: (i) deploying anti-malware protection, and (ii) removing extraneous software from relevant electronic information systems.
  • Use multi-factor authentication, with limited exceptions.
  • Review and test the effectiveness of certain security measures at least once every 12 months.
  • Business associates would need to notify covered entities upon activation of their contingency plans without unreasonable delay, but no later than 24 hours after activation.
  • Group health plans would need to include in plan documents certain requirements for plan sponsors: comply with the Security Rule; ensure that any agent to whom they provide ePHI agrees to implement the administrative, physical, and technical safeguards of the Security Rule; and notify their group health plans upon activation of their contingency plans without unreasonable delay, but no later than 24 hours after activation.
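
To make the encryption-at-rest item concrete, here is a minimal sketch assuming the third-party Python cryptography package (pip install cryptography). It illustrates the concept only and is not a HIPAA-compliant key-management or storage design.

    # Minimal symmetric encryption-at-rest sketch using the "cryptography"
    # package. Key handling here is illustrative; production systems would
    # keep keys in a managed KMS/HSM, never alongside the data.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()
    fernet = Fernet(key)

    ephi_record = b'{"patient_id": "12345", "diagnosis": "..."}'
    ciphertext = fernet.encrypt(ephi_record)   # AES-128-CBC + HMAC under the hood
    assert fernet.decrypt(ciphertext) == ephi_record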

After reviewing the proposed changes, concerned stakeholders may submit comments to OCR for consideration within 60 days of the January 6 publication by following the instructions outlined in the proposed rule. We support clients in developing and submitting comments to help shape the final rule, as well as in complying with the requirements once the rule is made final.

Congress Passes Defense Bill with AI Provisions — AI: The Washington Report

  • On December 18, Congress passed the FY 2025 National Defense Authorization Act (NDAA), which includes a number of AI provisions. The NDAA is expected to be signed into law by President Biden.
  • The NDAA includes the first – and likely the only – AI provisions passed by the 118th Congress.
  • Although the Bipartisan Senate AI Working Group had called for comprehensive AI legislation earlier this year, the AI provisions in the NDAA do not substantially regulate the use of AI. Instead, they direct defense agencies to launch pilot programs and initiatives to support the adoption of AI by the government for defense purposes.

On Wednesday, the US Congress voted with bipartisan support to pass the 2025 National Defense Authorization Act (NDAA), which includes a number of AI provisions. The bill will now be sent to President Biden’s desk, where he is expected to sign it into law.

In recent months, Senator Schumer (D-NY) and other Senators who had been determined to act on AI, as we covered, had increasingly touted the NDAA as the most likely pathway through which to pass AI legislation this Congress. And while the NDAA does include AI provisions, it does not include generally applicable provisions, for example, the comprehensive AI legislation that the Bipartisan Senate AI Working Group and other members called for this year, as we covered. Instead, the AI provisions in the NDAA direct defense agencies to launch pilot programs and initiatives to support the adoption of AI by the government for strategic and operational purposes.

AI in the NDAA

The NDAA includes a number of AI provisions that direct defense agencies to adopt the use of AI for strategic and operational purposes. The NDAA would:

  • Create a Chief Digital Engineering Recruitment and Management Officer: The NDAA establishes a Chief Digital Engineering Recruitment and Management Officer at the Department of Defense (DOD) who is charged with “clarifying the roles and responsibilities of the artificial intelligence workforce” at DOD.
  • Promote AI Education: The Act tasks the Chief Digital and AI Officer at DOD with developing, within 180 days after the NDAA is enacted, educational courses on AI for members of DOD.
  • Identify AI National Security Risks: The NDAA modifies the existing responsibilities of the Chief Digital and AI Officer Governing Council to direct the council to identify AI models that “could pose a national security risk if accessed by an adversary of the United States” and “develop strategies to prevent unauthorized access” to such technologies. The NDAA also directs the Council to “make recommendations [to] relevant federal agencies for legislative action” on AI.
  • Harness AI for Auditing: The Act directs the secretaries of the different branches of the armed forces to “encourage” the use of AI for “facilitating audits of the financial statements of the Department of Defense.”
  • Improve the Human Usability of AI: The Under Secretary of Defense for Research and Engineering shall launch an initiative to “improve the human usability of artificial intelligence systems and information derived from such systems through the application of cognitive ergonomics techniques.”
  • Consider AI in Budgeting: Within 180 days of the NDAA’s enactment, the Chief Digital and Artificial Intelligence Officer at DOD shall ensure that each budget for AI “includes estimates for the types of data required to train, maintain, improve the artificial intelligence components or subcomponents contained within such programs.”

The Secretary of Defense’s AI Programs

The NDAA specifically directs the Secretary of Defense to launch a number of pilot programs and initiatives to accelerate the adoption and development of AI by DOD.

  • Pilot Program for Biotechnology and AI: The Secretary of Defense shall launch a pilot program to “develop near-term use cases and demonstrations of artificial intelligence for national security-related biotechnology applications,” with support for public-private partnerships, within one year after the NDAA is enacted.
  • Pilot Program on AI Workflow Optimization: Within 60 days after the NDAA is enacted, the Secretary of Defense shall launch a pilot program to study and determine the feasibility of using AI to optimize the workflow and operations for DOD manufacturing facilities and contract administration services.
  • Multilateral AI Working Group: Within 90 days after the NDAA is passed, the Secretary of Defense shall form a working group “to develop and coordinate artificial intelligence initiatives among the allies and partners of the United States.”
  • Expanded DOD AI Capabilities: The Secretary of Defense shall establish a program “to meet the testing and processing requirements for next generation advanced artificial intelligence capabilities” at DOD installations. The Secretary is directed to expand the infrastructure of DOD for the “development and deployment of military applications of high-performing computing and artificial intelligence capabilities,” as well as develop “advanced artificial intelligence systems that have general-purpose military applications.”

Sense of Congress

The NDAA acknowledges both the potential strategic benefits that AI provides and the risks it poses. The use of AI presents numerous advantages, from strengthening “the security of critical strategic communications” to improving “the efficiency of planning process to reduce the risk of collateral damage.” However, it is the sense of Congress that “particular care must be taken to ensure that the incorporation of artificial intelligence and machine learning tools does not increase the risk that our Nation’s most critical strategic assets can be compromised.”

The new Congress will have to start from scratch: bills will have to be introduced or reintroduced, activity in the current Congress will be of no effect, and control of the Senate will pass to the Republicans. In this divided Congress, the efforts to pass AI regulations, largely led by Democrats, had always faced an uphill battle, complicated by partisan disagreements about the urgency with which to regulate AI and the need to better understand it. With Republicans taking control of both chambers in January, the next Congress is unlikely to push for the substantial AI regulation proposed in the past year, instead favoring deregulation and investment in AI R&D. But just as AI continues to evolve, it remains to be seen how the next Congress will chart the future course of AI legislative activity.

Texas Attorney General Launches Investigation into 15 Tech Companies

Texas Attorney General Ken Paxton recently launched investigations into Character.AI and 14 other technology companies on allegations of failure to comply with the safety and privacy requirements of the Securing Children Online through Parental Empowerment (“SCOPE”) Act and the Texas Data Privacy and Security Act.

The SCOPE Act places guardrails on digital service providers, including AI companies, with respect to sharing, disclosing, and selling minors’ personal identifying information without obtaining permission from the child’s parent or legal guardian. Similarly, the Texas Data Privacy and Security Act imposes strict notice and consent requirements on the collection and use of minors’ personal data.

Attorney General Paxton reiterated the Office of the Attorney General’s (“OAG’s”) focus on privacy enforcement, with the current investigations launched as part of the OAG’s recent major data privacy and security initiative. Per that initiative, the Attorney General opened an investigation in June into multiple car manufacturers for illegally surveilling drivers, collecting driver data, and sharing it with their insurance companies. In July, Attorney General Paxton secured a $1.4 billion settlement with Meta over the unlawful collection and use of facial recognition data, reportedly the largest settlement ever obtained from an action brought by a single state. In October, the Attorney General filed a lawsuit against TikTok for SCOPE Act violations.

The Attorney General, in the OAG’s press release announcing the current investigations, stated that technology companies are “on notice” that his office is “vigorously enforcing” Texas’s data privacy laws.


“Don’t You Have to Look at What the Statute Says?” – IMC’s Oral Arguments

As we noted earlier on TCPAWorld, the IMC’s odds against the FCC might be better than initially thought due to the panel of judges from the Eleventh Circuit hearing the oral arguments. Oral argument recordings are available online.

And the panel did not disappoint in pushing back on the FCC.

The conversation hinged on the FCC’s power to implement regulations in furtherance of the TCPA’s statutory language. This is important because the FCC is limited to implementation; it does not have the authority “to rewrite the statute,” as was mentioned in the oral arguments.

Judge Luck (HERE) had some concerns with the FCC’s limitations on the consumer’s ability to consent. The statute, according to Luck, intends to allow consumers to agree to receive calls. If that is the case, then a limitation of the consumer’s ability to exercise their rights is an attempt to rewrite the statute.

Luck agreed that implementing the statute is fine, but limiting the right of consumers to receive calls they consent to receive is overreach. Luck continued: “Just because you [the FCC] are ineffective at enforcing the authority doesn’t mean you have the right to limit one’s right, a statutory right, or rewrite those rights to limit what it means.”

The FCC attempted to argue that implementation of a statute by its very nature is going to lead to restriction, but Judge Luck pushed back on that. According to Luck, there are ways to implement statutes that don’t restrict a consumer’s statutory rights. This exchange was also telling:

LUCK: Without the regulation do you agree with me that the statute would allow it?

FCC: Yes.

LUCK: If so, then it’s not an implementation. It’s a restriction.

Luck was not the only judge who pushed back on the FCC. Judge Branch (I believe, as she was not identified) also strongly pushed back on the FCC’s “topically and logically associated” restriction as an element of consent. Branch noted that the FCC was looking at consumer behavior and essentially asserting that too many consumers didn’t know what they were doing in giving consent. The FCC stated, “I think we have to look at how the industry was operating…” only to be interrupted by Branch, who questioned that statement by asking, “Don’t you have to look at what the statute says?”

YIKES.

Finally, the FCC’s turn in oral argument ended with this exchange:

JUDGE: Perhaps the question should be “We have a problem here. We should talk to Congress about it.”

FCC: Congress did task the agency to implement here.

JUDGE: It’s given you power to implement, not carte blanche.

DOUBLE YIKES.

There was also a conversation around whether the panel should issue a stay in this case. The IMC argued that a stay was appropriate due to the uncertainty in the market.

It’s pretty clear that the judges questioned the statutory authority of the FCC to implement the 1:1 consent and the topically and logically related portions of the definition of prior express written consent.

While we don’t have a definitive answer yet on this issue, we do know this is going to be a lot more interesting than everyone thought before the oral arguments.

We will keep you up to date on this and we will have more information soon.

CFPB Takes Aim at Data Brokers in Proposed Rule Amending FCRA

On December 3, the CFPB announced a proposed rule to enhance oversight of data brokers that handle consumers’ sensitive personal and financial information. The proposed rule would amend Regulation V, which implements the Fair Credit Reporting Act (FCRA), to require data brokers to comply with credit bureau-style regulations under FCRA if they sell income data or certain other financial information on consumers, regardless of its end use.

Should this rule be finalized, the CFPB would be empowered to enforce the FCRA’s privacy protections and consumer safeguards in connection with data brokers who leverage emerging technologies that became prevalent after FCRA’s enactment.

What are some of the implications of the new rule?

  • Data Brokers Would Be Considered CRAs. The proposed rule defines the circumstances under which companies handling consumer data would be considered CRAs by clarifying the definition of “consumer reports.” The rule specifies that data brokers selling any of four types of consumer information—credit history, credit score, debt payments, or income/financial tier data—would generally be considered to be selling a consumer report.
  • Assembling Information About Consumers Would Make You a CRA. Under the rule, an entity would be a CRA if it assembles or evaluates information about consumers, including by collecting, gathering, or retaining; assessing, verifying, or validating; or contributing to or altering the content of such information. This view is in step with the Bureau’s recent Circular on AI-based background dossiers of employees. (See our prior discussion here.)
  • Header Information Would Be a Consumer Report. Under the proposed rule, communications from consumer reporting agencies of certain personal identifiers that they collect—such as name, addresses, date of birth, Social Security numbers, and phone numbers—would be consumer reports. This would mean that consumer reporting agencies could only sell such information (typically referred to as “credit header” data) if the user had a permissible purpose under the FCRA.
  • Marketing Would Not Be a Legitimate Business Need. The proposed rule emphasizes that marketing is not a “legitimate business need” under the FCRA. Accordingly, CRAs could not use consumer reports to decide for an advertiser which consumers should receive ads and would not be able to send ads to consumers on an advertiser’s behalf.
  • Enhanced Disclosure and Consent Requirements. Under the FCRA, consumers can give their consent to share data. Under the proposed rule, the Bureau clarified that consumers must be provided a clear and conspicuous disclosure stating how their consumer report will be used. The rule would also require data brokers to acknowledge a consumer’s right to revoke their consent. Finally, the proposed rule would require a new and separate consumer authorization for each product or service authorized by the consumer. The Bureau is focused on instances where a customer signs up for a specific product or service, such as credit monitoring, but then receives targeted marketing for a completely different product.

Comments on the rule must be received on or before March 3, 2025.

Putting It Into Practice: With the release of the rule so close to the end of Director Chopra’s term, it will be interesting to see what a new administration does with it. We expect a new CFPB director to scale back and rescind much of the informal regulatory guidance issued by the Biden administration. However, some aspects of the data broker rule have bipartisan support, so we may see parts of it finalized in 2025.

…But Wait, There’s More!

In 2025, eight additional U.S. state privacy laws will go into effect, joining California, Colorado, Connecticut, Montana, Oregon, Texas, Utah, and Virginia:

  1. Delaware Personal Data Privacy Act (effective Jan. 1, 2025)
  2. Iowa Consumer Data Protection Act (effective Jan. 1, 2025)
  3. Nebraska Data Privacy Act (effective Jan. 1, 2025)
  4. New Hampshire Privacy Act (effective Jan. 1, 2025)
  5. New Jersey Data Privacy Act (effective Jan. 15, 2025)
  6. Tennessee Information Protection Act (effective July 1, 2025)
  7. Minnesota Consumer Data Privacy Act (effective July 31, 2025)
  8. Maryland Online Data Privacy Act (effective Oct. 1, 2025)

While many of these eight state privacy laws are similar to current privacy laws in effect, there are some noteworthy differences that you will need to be mindful of heading into the New Year. Additionally, if you did not take Texas, Oregon and Montana into consideration in 2024, now is the time to do so!

Here is a roadmap of key considerations as you address these additional state privacy laws.

1. Understand What Laws Apply to Your Organization

To help determine which laws apply to your organization, you need to know the type and quantity of personal data you collect and how it is used. Each of the eight new state laws differs in its scope of application, as thresholds vary based on (1) the number of state residents whose personal data is controlled or processed and (2) the percentage of revenue a controller derives from the sale of personal data.

Delaware, New Hampshire, and Maryland have the lowest processing threshold – 35,000 consumers.

Nebraska’s threshold requirements are similar to Texas’ threshold requirements: the law applies to any organization that operates in the state, processes or sells personal data, and is not classified as a small business as defined by the U.S. Small Business Administration.

Notably, Maryland and Minnesota will apply to non-profits, except for those that fall into a narrow exception.

See our chart at the end of this article for ease of reference.
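
As a rough illustration, the sketch below encodes only the thresholds discussed above: the 35,000-consumer floor in Delaware, New Hampshire, and Maryland, and Nebraska’s Texas-style test. Real applicability analysis has more prongs (revenue-percentage tests, exemptions) and belongs with counsel; treat this as a starting point, not advice.

    # Toy applicability check built only from the thresholds discussed above.
    LOW_THRESHOLD_STATES = {"DE": 35_000, "NH": 35_000, "MD": 35_000}

    def may_apply(state: str, residents_processed: int = 0,
                  operates_in_state: bool = False,
                  sells_or_processes: bool = False,
                  is_sba_small_business: bool = True) -> bool:
        # Nebraska (like Texas): operates in the state, processes or sells
        # personal data, and is not an SBA-defined small business.
        if state == "NE":
            return operates_in_state and sells_or_processes and not is_sba_small_business
        # Delaware, New Hampshire, Maryland: 35,000-consumer threshold.
        threshold = LOW_THRESHOLD_STATES.get(state)
        return threshold is not None and residents_processed >= threshold

    print(may_apply("MD", residents_processed=40_000))   # True
    print(may_apply("NE", operates_in_state=True, sells_or_processes=True,
                    is_sba_small_business=False))        # True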

2. Identify Nuances

Organizations will need to pay particular attention to Maryland’s data minimization requirements, as they are the strictest of the eight. Under Maryland’s law, controllers will have unique obligations to meet, including the following:

  • A limit on the collection or processing of sensitive data to what is “reasonably necessary and proportionate to provide or maintain a specific product or service requested by the consumer to whom the data pertains”;
  • A prohibition on processing minors’ (under 18 years old) personal data for targeted advertising; and
  • A broad prohibition on the sale of sensitive data.

If a controller engages in the sale of sensitive data, Texas’ privacy law, which went into effect in July 2024, requires the controller to include the following notice in the same place its privacy policy is linked: “NOTICE: We may sell your sensitive personal data.” Similarly, if a controller engages in the sale of biometric personal data, the following notice must be included in the privacy policy: “NOTICE: We may sell your biometric personal data.” Nebraska requires companies to obtain opt-in consent before selling sensitive data. Maryland prohibits the sale of sensitive data altogether.

Minnesota takes data inventory a step further, requiring companies to maintain an inventory of personal data processed and document and maintain a description of the policies and procedures that they adopt to comply with the act.

3. Refine Privacy Rights Management

All states provide consumers with the right to access, delete, correct (except Iowa), and obtain a copy of their personal data.

Minnesota’s law provides consumers with two additional rights:

  1. The right to request the specific third parties to whom a business has disclosed personal data. Controllers may choose to respond to such a request either by providing the names of the specific third parties to which they have disclosed the consumer’s personal data or the names of the third parties to which they have disclosed any personal data.
  2. The right to question the results of a controller’s profiling, to the extent it produced legal effects. Consumers will have the right to be informed of the reason that the profiling resulted in a specific decision and be informed of the actions the consumers may take to secure a different decision in the future.

Aligning with California and Utah, Iowa requires controllers to provide notice and an opportunity to opt out of the processing of sensitive data.

Interestingly, Iowa does not affirmatively establish a right to opt out of online targeted advertising.

4. Conduct Data Privacy Impact Assessments

Most state privacy laws require controllers to conduct data privacy impact assessments for high-risk processing activities such as the sale of personal data, targeted advertising, profiling, and sensitive data processing. Nebraska, Tennessee, Minnesota, and Maryland follow Oregon by including any processing activities that present a heightened risk of harm to a consumer. Maryland takes this a step further, requiring that the assessment include an evaluation of each algorithm that is used.

5. Update Privacy Notices

All state privacy laws require privacy notices at the time of collecting personal data. It is essential that you keep your privacy notice up to date and ensure (at a bare minimum) it covers data categories, third-party sharing, consumer privacy rights options, and opt-out procedures. Minnesota also requires controllers to provide a “reasonably accessible, clear, and meaningful” online privacy notice, posted on the homepage using a hyperlink that contains the word “privacy.”

As state privacy laws stack up, having a structured, adaptable, and principles-based approach paves the path to sustainable compliance.

Make 2025 the year your privacy program doesn’t just meet the minimum—it excels.

Click here to view the 2025 US State Privacy Laws Applicability Chart

Public Urged to Use Encryption for Mobile Phone Messaging and Calls

On December 4, 2024, the law enforcement and cyber security agencies (the Agencies) of four of the five members of the Five Eyes intelligence-sharing group (the United States, Australia, Canada, and New Zealand) published a joint guide for network engineers, defenders of communications infrastructure, and organizations with on-premises enterprise equipment (the Guide). The Agencies strongly encourage applying the Guide’s best practices to improve visibility and harden network devices against exploitation by hackers, including those affiliated with the People’s Republic of China (PRC). The fifth member, the United Kingdom, released a statement supportive of the joint guide but stated it had alternate methods of mitigating cyber risks for its telecom providers.

In November 2024, the Federal Bureau of Investigation (FBI) and the U.S. Cybersecurity and Infrastructure Security Agency (CISA) issued a joint statement updating the public on their investigation into the previously reported PRC-affiliated hacks of multiple telecommunications companies’ networks. The FBI and CISA reported that these hacks appeared to focus on the cell phone activity of individuals involved in political or government activity and on copies of law enforcement informational requests subject to court orders. At the time of the update, however, these U.S. agencies and members of Congress underscored the broad and significant nature of the breach. At least one elected official stated that the hacks potentially exposed unencrypted cell phone conversations with someone in America to the hackers.

In particular, the Guide recommends adopting actions that quickly identify anomalous behavior, vulnerabilities, and threats, and that respond to a cyber incident. It also guides telecoms and businesses to reduce existing vulnerabilities, improve secure configuration habits, and limit potential entry points. One of the Guide’s recommended best practices attracting media attention is ensuring that mobile phone messaging and call traffic is fully end-to-end encrypted to the maximum extent possible. Without full end-to-end encryption, the content of calls and messages always has the potential to be intercepted. Android-to-Android messaging and iPhone-to-iPhone messaging are fully end-to-end encrypted, but messaging between an Android and an iPhone is not currently end-to-end encrypted. Google and Apple recommend using a fully encrypted messaging app to better protect the content of messages from hackers.
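
To illustrate what “fully end-to-end encrypted” means in practice, here is a simplified sketch assuming the third-party Python cryptography package: each endpoint holds its own private key, the two sides derive the same symmetric key, and anything in between sees only ciphertext. Real messaging protocols (such as Signal’s) add key ratcheting, authentication, and much more; this is conceptual only.

    # Conceptual end-to-end encryption sketch using the "cryptography" package.
    import os
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()

    def session_key(own_private, peer_public) -> bytes:
        # Diffie-Hellman exchange, then derive a fixed-length symmetric key.
        shared = own_private.exchange(peer_public)
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"e2ee-demo").derive(shared)

    k_alice = session_key(alice, bob.public_key())
    k_bob = session_key(bob, alice.public_key())
    assert k_alice == k_bob                      # both ends derive the same key

    nonce = os.urandom(12)
    ciphertext = AESGCM(k_alice).encrypt(nonce, b"meet at noon", None)
    # Only the other endpoint can decrypt; a carrier sees only ciphertext.
    print(AESGCM(k_bob).decrypt(nonce, ciphertext, None))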

The FBI and CISA are continuing to investigate the hacks and will update the public as the investigation permits. In the interim, telecom providers and companies are encouraged to adopt the Guide’s best practices and to report any suspicious activity to their local FBI field office or the FBI’s Internet Crime Complaint Center. Cyber incidents may also be reported to CISA.