FTC: Three Enforcement Actions and a Ruling

In today’s digital landscape, the exchange of personal information has become ubiquitous, often without consumers fully comprehending the extent of its implications.

The recent actions undertaken by the Federal Trade Commission (FTC) shine a light on the intricate web of data extraction and mishandling that pervades our online interactions. From the seemingly innocuous permission requests of game apps to the purported protection promises of security software, consumers find themselves at the mercy of data practices that blur the lines between consent and exploitation.

The FTC’s proposed settlements with X-Mode Social (“X-Mode”) and InMarket, two data aggregators, and Avast, a security software company, underscore the need for businesses to appropriately secure and limit the use of consumer data, including information previously considered innocuous, such as browsing and location data. In a world where personal information serves as currency, ensuring consumer privacy compliance has never been more critical – or posed such a commercial risk for failing to get it right.

X-Mode and InMarket Settlements: The proposed settlements with X-Mode and InMarket concern numerous allegations based on the mishandling of consumers’ location data. Both companies allegedly collected precise location data through their own mobile apps and those of third parties (through software development kits). X-Mode is alleged to have sold precise location data (advertised as being 70% accurate within 20 meters or less) linked to timestamps and unique persistent identifiers (i.e., names, email addresses, etc.) of its consumers to private government contractors without obtaining proper consent. Plotting this data on a map makes it easy to reveal each person’s movements over time.

InMarket allegedly cross-referenced the location data it collected with points of interest to sort consumers into particularized audience segments for targeted advertising, without adequately informing consumers – examples of audience segments include parents of preschoolers, Christian church attendees, and “wealthy and not healthy,” among other groupings.

Avast Settlement: Avast, a security software company, allegedly sold granular and re-identifiable browsing information about its consumers despite promising to protect their privacy. Avast allegedly collected extensive browsing data through its antivirus software and browser extensions while assuring consumers that their browsing data would be used only in aggregated and anonymous form. The data collected by Avast revealed visits to websites that could be attributed to particular people and allowed inferences to be drawn about those individuals – examples include visits to academic papers on symptoms of breast cancer, education courses on tax exemptions, government jobs in Fort Meade, Maryland with a salary over $100,000, links to FAFSA applications, and directions from one location to another, among others.

Sensitivity of Browsing and Location Data

It is important to note that none of the underlying datasets in question contained traditional types of personally identifiable information (“PII”), such as names, identification numbers, or physical descriptions. Even so, the three proposed FTC settlements underscore the sensitive nature of browsing and location data, both because of the insights such data reveals, such as religious beliefs, health conditions, and financial status, and because of the ease with which those insights can be linked to particular individuals.

In the digital age, the volume of data collected about individuals online makes re-identification easier every day. Even when traditional PII is not included in a dataset, linking enough data points can build a profile of an individual. When that profile is then tied to an identifier (such as a username, phone number, or email address provided when downloading an app or setting up an account) and cross-referenced with publicly available data, such as a name, email address, phone number, or content on social media sites, it can yield deep insights into an individual. Despite the absence of traditional types of PII, such data poses significant privacy risks because of the potential for re-identification and the intimate details about individuals’ lives it can divulge.
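
To illustrate how such linkage can work in practice, the following sketch uses entirely hypothetical data and field names (none drawn from the FTC complaints) to show how an “anonymous” dataset containing only a persistent device identifier and location pings can be joined against publicly available records to put a name to the identifier.

```python
# Hypothetical illustration of re-identification by linkage.
# All data, identifiers, and field names are invented for this example.
import pandas as pd

# "Anonymous" app data: a persistent device identifier plus precise location pings.
app_pings = pd.DataFrame({
    "device_id": ["abc-123", "abc-123", "def-456"],
    "lat":       [42.3601, 42.3735, 42.3611],
    "lon":       [-71.0589, -71.1097, -71.0570],
    "timestamp": pd.to_datetime(
        ["2024-01-02 08:05", "2024-01-02 18:40", "2024-01-02 09:15"]),
})

# Publicly available records that tie a residential location to a name.
public_records = pd.DataFrame({
    "name":     ["Jane Doe", "John Roe"],
    "home_lat": [42.3735, 42.3611],
    "home_lon": [-71.1097, -71.0570],
})

# Infer each device's likely "home": where it is observed in the evening hours.
evening = app_pings[app_pings["timestamp"].dt.hour >= 18]
inferred_home = evening.groupby("device_id")[["lat", "lon"]].first().reset_index()

# Join the inferred home location against the public records.
linked = inferred_home.merge(
    public_records,
    left_on=["lat", "lon"],
    right_on=["home_lat", "home_lon"],
)
print(linked[["device_id", "name"]])  # the "anonymous" device_id now has a name
```

Even this toy example makes the point: once a single persistent identifier can be anchored to one known fact (here, an inferred home location), every other record carrying that identifier inherits the identification.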

The FTC emphasizes that companies must recognize and treat browsing and location data as sensitive information and implement appropriately robust safeguards to protect consumer privacy. This is especially true when the dataset includes information as precise as that cited by the FTC in its proposed settlements.

Accountability and Consent

With browsing and location data, there is also a concern that consumers may not be fully aware of how their data is used. For instance, Avast claimed to protect consumers’ browsing data and then sold that very same browsing information, often without notice to consumers. When Avast did inform customers of its practices, the FTC claims it deceptively stated that any sharing would be “anonymous and aggregated.” Similarly, X-Mode claimed it would use location data for ad personalization and location-based analytics; consumers were unaware that such location data was also sold to government contractors.

The FTC has recognized that a company may need to process an individual’s information to provide the services or products that individual requests. The FTC also holds that such processing does not mean the company is then free to collect, access, use, or transfer that information for other purposes (e.g., marketing, profiling, background screening). Essentially, purpose matters. As the FTC explains, a flashlight app provider cannot collect, use, store, or share a user’s precise geolocation data, nor can a tax preparation service use a customer’s information to market other products or services.

If companies want to use consumer personal information for purposes other than providing the requested product or services, the FTC states that companies should inform consumers of such uses and obtain consent to do so.

The FTC aims to hold companies accountable for their data-handling practices and ensure that consumers are provided with meaningful consent mechanisms. Companies should handle consumer data only for the purposes for which data was collected and honor their privacy promises to consumers. The proposed settlements emphasize the importance of transparency, accountability, meaningful consent, and the prioritization of consumer privacy in companies’ data handling practices.

Implementing and Maintaining Safeguards

Data, especially data that yields insights and inferences about individuals, is extremely valuable to companies, but that same data puts those individuals’ privacy at risk. Companies that sell or share information sometimes include contractual limitations on the use of the data, but not all contracts contain such restrictions, or restrictions sufficient to safeguard individuals’ privacy.

For instance, the FTC alleges that some of Avast’s underlying contracts did not prohibit the re-identification of Avast’s users. Where Avast’s contracts did prohibit re-identification, the FTC alleges that purchasers of the data could still match Avast users’ browsing data with information from other sources, so long as that information was not “personally identifiable.” Avast also allegedly failed to audit or otherwise confirm that purchasers of the data complied with these prohibitions.

The proposed complaint against X-Mode alleges that, at least twice, X-Mode sold location data to purchasers who violated restrictions in X-Mode’s contracts by reselling the data to companies further downstream. The X-Mode example shows that even when restrictions are included in contracts, they may not prevent misuse by downstream parties.

Ongoing Commitment to Privacy Protection

The FTC stresses the importance of obtaining informed consent before collecting or disclosing consumers’ sensitive data, as the disclosure of such data can violate consumer privacy and expose consumers to harms such as stigma and discrimination. While privacy notices, consent, and contractual restrictions are important, the FTC emphasizes that they must be backed up by action. Accordingly, the FTC’s proposed orders require companies to design, implement, maintain, and document safeguards to protect the personal information they handle, especially when it is sensitive in nature.

What Does a Company Need To Do?

Given the recent enforcement actions by the FTC, companies should:

  1. Consider the data they collect and whether that data is needed to provide the services and products the consumer requested and/or to meet a legitimate business need in support of providing those services and products (e.g., billing, ongoing technical support, shipping);
  2. Treat browsing and location data as sensitive personal information;
  3. Accurately inform consumers of the types of personal information they collect, how it is used, and the parties to whom it is disclosed;
  4. Collect, store, use, or share consumers’ sensitive personal information (including browsing and location data) only with those consumers’ informed consent;
  5. Limit the use of consumers’ personal information solely to the purposes for which it was collected, and not market, sell, or monetize consumers’ personal information beyond those purposes;
  6. Design, implement, maintain, document, and adhere to safeguards that actually protect consumers’ privacy; and
  7. Audit and inspect downstream service providers and third parties with whom consumers’ data is shared to confirm they are (a) adhering to and complying with contractual restrictions and (b) implementing appropriate safeguards to protect such consumer data.

The Imperatives of AI Governance

If your enterprise doesn’t yet have an AI governance policy, it needs one. We explain here why having such a policy is a best practice and the key issues it should address.

Why adopt an AI governance policy?

AI has problems.

AI is good at some things, and bad at other things. What other technology is linked to having “hallucinations”? Or, as Sam Altman, CEO of OpenAI, recently commented, it’s possible to imagine “where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”

If that isn’t a red flag…

AI can collect and summarize myriad information sources at breathtaking speed. Its ability to reason from or evaluate that information, however, consistent with societal and governmental values and norms, is almost non-existent. It is a tool – not a substitute for human judgment and empathy.

Some critical concerns are:

  • Are AI’s outputs accurate? How precise are they?
  • Does it use PII, biometric, confidential, or proprietary data appropriately?
  • Does it comply with applicable data privacy laws and best practices?
  • Does it mitigate the risks of bias, whether societal or developer-driven?

AI is a frontier technology.

AI is a transformative, foundational technology evolving faster than its creators, government agencies, courts, investors and consumers can anticipate.

In other words, there are relatively few rules governing AI—and those that have been adopted are probably out of date. You need to go above and beyond regulatory compliance and create your own rules and guidelines.

And the capabilities of AI tools are not always foreseeable.

Hundreds of companies are releasing AI tools without fully understanding the functionality, potential and reach of these tools. In fact, this is somewhat intentional: at some level, AI’s promise – and danger – is its ability to learn or “evolve” to varying degrees, without human intervention or supervision.

AI tools are readily available.

Your employees have access to AI tools, regardless of whether you’ve adopted those tools at an enterprise level. Ignoring AI’s omnipresence, and employees’ inherent curiosity and desire to be more efficient, creates an enterprise-level risk.

Your customers and stakeholders demand transparency.

The policy is a critical part of building trust with your stakeholders.

Your customers likely have two categories of questions:

How are you mitigating the risks of using AI? And, in particular, what are you doing with my data?

And

Will AI benefit me – by lowering the price you charge me? By enhancing your service or product? Does it truly serve my needs?

Your board, investors and leadership team want similar clarity and direction.

True transparency includes explainability: At a minimum, commit to disclose what AI technology you are using, what data is being used, and how the deliverables or outputs are being generated.

What are the key elements of AI governance?

Any AI governance policy should be tailored to your institutional values and business goals. Crafting the policy requires asking some fundamental questions and then delineating clear standards and guidelines to your workforce and stakeholders.

1. The policy is a “living” document, not a one-and-done task.

Adopt a policy, and then re-evaluate it at least semi-annually, or even more often. AI governance will not be a static challenge: It requires continuing consideration as the technology evolves, as your business uses of AI evolve, and as legal compliance directives evolve.

2. Commit to transparency and explainability.

What is AI? Start there.

Then,

What AI are you using? Are you developing your own AI tools, or using tools created by others?

Why are you using it?

What data does it use? Are you using your own datasets, or the datasets of others?

What outputs and outcomes is your AI intended to deliver?

3. Check the legal compliance box.

At a minimum, use the policy to communicate to stakeholders what you are doing to comply with applicable laws and regulations.

Update the data privacy and cyber risk policies you already have in place so that they also address AI risks.

The EU recently adopted its Artificial Intelligence Act, the world’s first comprehensive AI legislation. The White House has issued AI directives to dozens of federal agencies. Depending on the industry, you may already be subject to SEC, FTC, USPTO, or other regulatory oversight.

And keeping current will require frequent diligence: The technology is rapidly changing even while the regulatory landscape is evolving weekly.

4. Establish accountability. 

Who within your company is “in charge of” AI? Who will be accountable for the creation, use and end products of AI tools?

Who will manage AI vendor relationships? Is there clarity as to what risks will be borne by you, and what risks your AI vendors will own?

What is your process for approving, testing and auditing AI?

Who is authorized to use AI? What AI tools are different categories of employees authorized to use?

What systems are in place to monitor AI development and use? To track compliance with your AI policies?

What controls will ensure that the use of AI is effective, while avoiding cyber risks and vulnerabilities, or societal biases and discrimination?

5. Embrace human oversight as essential.

Again, building trust is key.

The adoption of a frontier, possibly hallucinatory technology is not a “build it, get it running, and then step back” process.

Accountability, verifiability, and compliance require hands-on ownership and management.

If nothing else, ensure that your AI governance policy conveys this essential point.

Another Lesson for Higher Education Institutions about the Importance of Cybersecurity Investment

Key Takeaway

A Massachusetts class action claim underscores that institutions of higher education will continue to be targets for cybercriminals – and class action plaintiffs know it.

Background

On January 4, 2023, in Jackson v. Suffolk University, No. 23-cv-10019, Jackson (Plaintiff) filed a proposed class action lawsuit in the U.S. District Court for the District of Massachusetts against her alma mater, Suffolk University (Suffolk), arising from a data breach affecting thousands of current and former Suffolk students.

The complaint alleges that an unauthorized party gained access to Suffolk’s computer network on or about July 9, 2022.  After learning of the unauthorized access, Suffolk engaged cybersecurity experts to assist in an investigation. Suffolk completed the investigation on November 14, 2022.  The investigation concluded that an unauthorized third party gained access to and/or exfiltrated files containing personally identifiable information (PII) for students who enrolled after 2002.

The complaint further alleges that the PII exposed in the data breach included students’ full names, Social Security numbers, driver’s license numbers, state identification numbers, financial account information, and protected health information.  While Suffolk did not release the total number of students affected by the data breach, the complaint alleges that approximately 36,000 Massachusetts residents were affected.  No information was provided about affected out-of-state residents.

Colleges and Universities are Prime Targets for Cybercriminals

Unfortunately, Suffolk’s data breach is not an outlier.  Colleges and universities present a wealth of opportunities for cybercriminals because they house massive amounts of sensitive data, including employee and student personal and financial information, medical records, and confidential and proprietary data.  Given how easily stolen data can be sold through open and anonymous forums on the Dark Web, colleges and universities will remain prime targets for cybercriminals.

Recognizing this, the FBI issued a warning for higher education institutions in March 2021, informing them that cybercriminals have been targeting institutions of higher education with ransomware attacks.  In May 2022, the FBI issued a second alert, warning that cyber bad actors continue to conduct attacks against colleges and universities.

Suffolk Allegedly Breached Data Protection Duty

In the complaint, Plaintiff alleges that Suffolk did not follow industry and government guidelines to protect student PII.  In particular, Plaintiff alleges that Suffolk’s failure to protect student PII violated the Federal Trade Commission Act, 15 U.S.C.A. § 45, and that Suffolk failed to comply with the Financial Privacy Rule of the Gramm-Leach-Bliley Act (GLBA), 15 U.S.C.A. § 6801.  Further, the suit alleges that Suffolk violated the Massachusetts Right to Privacy Law, Mass. Gen. Laws Ann. ch. 214, § 1B, as well as its common law duties.

How Much Cybersecurity is Enough?

To mitigate cyber risk, colleges and universities must not only follow applicable government guidelines but also consider following industry best practices to protect student PII.

In particular, GLBA requires a covered organization to designate a qualified individual to oversee its information security program and conduct risk assessments that continually assess internal and external risks to the security, confidentiality and integrity of personal information.  After the risk assessment, the organization must address the identified risks and document the specific safeguards intended to address those risks.  See 16 CFR § 314.4.  

Suffolk, as well as other colleges and universities, may also want to look to Massachusetts law for guidance about how to further invest in their cybersecurity programs.  Massachusetts was an early leader among U.S. states when, in 2007, it enacted the “Regulations to safeguard personal information of commonwealth residents” (Mass. Gen. Laws ch. 93H § 2) (Data Security Law).  The Data Security Law – still among the most prescriptive general data security state laws – sets forth a list of minimum requirements that, while not specific to colleges and universities, serves as a good cybersecurity checklist for all organizations:

  1. Designation of one or more employees responsible for the written information security program (WISP).
  2. Assessments of risks to the security, confidentiality and/or integrity of organizational Information and the effectiveness of the current safeguards for limiting those risks, including ongoing employee and independent contractor training, compliance with the WISP and tools for detecting and preventing security system failures.
  3. Employee security policies relating to protection of organizational Information outside of business premises.
  4. Disciplinary measures for violations of the WISP and related policies.
  5. Access control measures that prevent terminated employees from accessing organizational Information.
  6. Management of service providers that access organizational Information as part of providing services directly to the organization, including retaining service providers capable of protecting organizational Information consistent with the Data Security Regulations and other applicable laws and requiring service providers by contract to implement and maintain appropriate measures to protect organizational Information.
  7. Physical access restrictions for records containing organizational Information and storage of those records in locked facilities, storage areas or containers.
  8. Regular monitoring of the WISP to ensure that it is preventing unauthorized access to or use of organizational Information and upgrading the WISP as necessary to limit risks.
  9. Review of the WISP at least annually, or more often if business practices that relate to the protection of organizational Information materially change.
  10. Documentation of responsive actions taken in connection with any “breach of security” and mandatory post-incident review of those actions to evaluate the need for changes to business practices relating to protection of organizational Information.

An organization not implementing any of these controls should consider documenting the decision-making process as a defensive measure.  In implementing these requirements and recommendations, colleges and universities can best position themselves to thwart cybercriminals and plaintiffs alike.

© Copyright 2023 Squire Patton Boggs (US) LLP