Privacy Tip #359 – GoodRx Settles with FTC for Sharing Health Information for Advertising

The Federal Trade Commission (FTC) announced on February 1, 2023, that it had settled, for $1.5 million, its first enforcement action under its Health Breach Notification Rule against GoodRx Holdings, Inc., a telehealth and prescription drug provider.

According to the press release, the FTC alleged that GoodRx failed “to notify consumers and others of its unauthorized disclosures of consumers’ personal health information to Facebook, Google, and other companies.”

Under the proposed federal court order (the Order), GoodRx will be “prohibited from sharing user health data with applicable third parties for advertising purposes.” The complaint alleged that GoodRx told consumers it would not share their personal health information, yet monetized that information by sharing it with third parties such as Facebook and Instagram to target users with personalized health- and medication-specific ads.

The complaint also alleged that GoodRx “compiled lists of its users who had purchased particular medications such as those used to treat heart disease and blood pressure, and uploaded their email addresses, phone numbers, and mobile advertising IDs to Facebook so it could identify their profiles. GoodRx then used that information to target these users with health-related advertisements.” The complaint further alleged that those third parties used the information received from GoodRx for their own internal purposes to improve the effectiveness of their advertising.

The proposed Order must be approved by a federal court before it can take effect. To address the FTC’s allegations, the Order prohibits the sharing of health data for ads; requires user consent for any other sharing; requires the company to direct third parties to delete consumer health data; limits data retention; and mandates implementation of a privacy program.

Copyright © 2023 Robinson & Cole LLP. All rights reserved.

The Metaverse: A Legal Primer for the Hospitality Industry

The metaverse, regarded by many as the next frontier in digital commerce, does not, on its surface, appear to offer many benefits to an industry with a core mission of providing a physical space for guests to use and occupy. However, there are many opportunities that the metaverse may offer to owners, operators, licensors, managers, and other participants in the hospitality industry that should not be ignored.

What is the Metaverse?

The metaverse is a term used to describe a digital space that allows social interactions, frequently through the use of a digital avatar. Built largely on decentralized blockchain technology rather than centralized servers, the metaverse consists of immersive, three-dimensional experiences; persistent, traceable digital assets; and a strong social component. The metaverse is still in its infancy, so many of its uses remain aspirational; however, metaverse platforms have already seen a great deal of activity and commerce. Meanwhile, technology companies are working to produce the next generation of consumer electronics that they hope will make the metaverse a more common location for commerce.

The Business Case for the Hospitality Industry

The hospitality industry may find the metaverse useful in enhancing marketing and guest experiences.

Immersive virtual tours of hotel properties and the surrounding area may allow potential customers to explore all aspects of a property and its surroundings before booking. Operators may also offer additional booking options or promotions within the virtual tour to increase exposure to customers.

Creating hybrid events that combine in-person and remote attendance, such as conferences, weddings, or other celebrations, is also possible through the metaverse. This would allow on-site guests to interact with those not physically present at the property, creating an integrated experience and possible additional revenue streams.

Significantly, numerous outlets have identified the metaverse as one of the top emerging trends in technology. As its popularity grows, the metaverse will become an important location for the hospitality industry to interact with and market to its customer base.

Legal Issues to Consider

  1. Select the right platform for you. There are multiple metaverse platforms, and they all have tradeoffs. Some, including Roblox and Fortnite, offer access to more consumers but generally give businesses less control over content within the programs. Others, such as Decentraland and the Sandbox, provide businesses with greater control but smaller audiences and higher barriers to entry. Each business should consider who its target audience is, which platform will best reach that audience, and its long-term metaverse strategy before committing to a particular platform.
  2. Register your IP. Businesses should consider filing trademark applications covering core metaverse goods or services and securing any available blockchain domains, which can be used to facilitate metaverse payments and to direct users to blockchain content, such as websites and decentralized applications. Given the accelerating adoption of blockchain domains and the limited dispute-resolution recourse available, we strongly encourage businesses to consider securing intellectual property rights now.
  3. Establish a dedicated legal entity. Businesses may want to consider setting up a new subsidiary or affiliate to hold digital assets, shield other parts of their business from metaverse-related liability, and isolate the potential tax consequences.
  4. Take custody of digital assets. Because of their digital character, digital assets such as cryptocurrency, which may be the primary method of payment in the metaverse, are uniquely vulnerable to loss and theft. Before acquiring cryptocurrency, businesses will need to set up a secure blockchain wallet and adopt appropriate access and security controls.
  5. Protect and enforce your IP. The decentralized nature of the metaverse poses a significant challenge to businesses and intellectual property owners. Avenues for enforcing intellectual property rights in the metaverse are constantly evolving and may require multiple tools to stop third-party infringements.
  6. Reserve metaverse rights. Each business that licenses its IP, particularly on a geographic or territorial basis, should review existing license agreements to determine what rights, if any, its licensees have for metaverse-related uses. Moving forward, each brand owner is encouraged to expressly reserve rights for metaverse-related uses and to exercise caution before authorizing any third party to deploy IP to the metaverse on the business’s behalf.
  7. Tax matters. Although current tax law does not fully address the metaverse, attention must be paid to how it applies to metaverse transactions. This is particularly the case for state and local sales and use, communications, and hotel taxes.

Ready to Enter?

As we move into the future, the metaverse appears poised to provide a tremendous opportunity for the hospitality industry to connect directly with consumers in an interactive way that was until recently considered science fiction. But like every new frontier, technological or otherwise, there are legal and regulatory hurdles to consider and overcome.

© 2022 ArentFox Schiff LLP

Texas AG Sues Meta Over Collection and Use of Biometric Data

On February 14, 2022, Texas Attorney General Ken Paxton brought suit against Meta, the parent company of Facebook and Instagram, over the company’s collection and use of biometric data. The suit alleges that Meta collected and used Texans’ facial geometry data in violation of the Texas Capture or Use of Biometric Identifier Act (“CUBI”) and the Texas Deceptive Trade Practices Act (“DTPA”). The lawsuit is significant because it represents the first time the Texas Attorney General’s Office has brought suit under CUBI.

The suit focuses on Meta’s “tag suggestions” feature, which the company has since retired. The feature scanned faces in users’ photos and videos to suggest “tagging” (i.e., identifying by name) the users who appeared in them. In the complaint, Attorney General Paxton alleged that Meta collected and analyzed individuals’ facial geometry data (which constitutes biometric data under CUBI) without their consent, shared the data with third parties, and failed to destroy the data in a timely manner, all in violation of CUBI and the DTPA. CUBI regulates the collection and use of biometric data for commercial purposes, and the DTPA prohibits false, misleading, or deceptive acts or practices in the conduct of any trade or commerce.

Among other forms of relief, the complaint seeks an injunction enjoining Meta from violating these laws, a $25,000 civil penalty for each violation of CUBI, and a $10,000 civil penalty for each violation of the DTPA. The suit follows Facebook’s $650 million class-action settlement over alleged violations of Illinois’ Biometric Information Privacy Act and the company’s discontinuance of the tag suggestions feature last year.

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

In the Coming ‘Metaverse’, There May Be Excitement but There Certainly Will Be Legal Issues

The concept of the “metaverse” has garnered much press coverage of late, addressing such topics as the new appetite for metaverse investment opportunities, a recent virtual land boom, or just the promise of it all, where “crypto, gaming and capitalism collide.”  The term “metaverse,” which comes from Neal Stephenson’s 1992 science fiction novel “Snow Crash,” is generally used to refer to the development of virtual reality (VR) and augmented reality (AR) technologies, featuring a mashup of massive multiplayer gaming, virtual worlds, virtual workspaces, and remote education to create a decentralized wonderland and collaborative space. The grand concept is that the metaverse will be the next iteration of the mobile internet and a major part of both digital and real life.

Don’t feel like going out tonight in the real world? Why not stay “in” and catch a show or meet people/avatars/smart bots in the metaverse?

As currently conceived, the metaverse, “Web 3.0,” would feature a synchronous environment giving users a seamless experience across different realms, even if such discrete areas of the virtual world are operated by different developers. It would boast its own economy where users and their avatars interact socially and use digital assets based in both virtual and actual reality, a place where commerce would presumably be heavily based in decentralized finance (DeFi). No single company or platform would operate the metaverse; rather, it would be administered by many entities in a decentralized manner (presumably on some open source metaverse OS) and work across multiple computing platforms. At the outset, the metaverse would look like a virtual world featuring enhanced experiences interfaced via VR headsets, mobile devices, gaming consoles and haptic gear that makes you “feel” virtual things. Later, the contours of the metaverse would be shaped by user preferences, monetary opportunities and incremental innovations by developers building on what came before.

In short, the vision is that multiple companies, developers and creators will come together to create one metaverse (as opposed to proprietary, closed platforms) and have it evolve into an embodied mobile internet, one that is open and interoperable and would include many facets of life (i.e., work, social interactions, entertainment) in one hybrid space.

For the metaverse to become a reality – that is, to successfully link current gaming and communications platforms with other new technologies into a massive new online destination – many obstacles will have to be overcome, even beyond the hardware, software and integration issues. The legal issues stand out, front and center. Indeed, the concept of the metaverse presents a law school final exam’s worth of legal questions to sort out. Meanwhile, we are still trying to resolve the myriad legal issues presented by “Web 2.0,” the Internet as we know it today. Adding the metaverse to the picture will certainly make things even more complicated.

At the heart of it is the question of what legal underpinnings we need for the metaverse infrastructure – an infrastructure that will allow disparate developers and studios, e-commerce marketplaces, platforms and service providers to coexist within one virtual world. To make it even more interesting, the metaverse is envisioned as an interoperable, seamless experience for shoppers, gamers, social media users or just curious internet-goers armed with wallets full of crypto to spend and virtual assets to flaunt. Currently, we have some well-established web platforms that are closed digital communities and some emerging ones that are open, each with varying business models that will have to be adapted, in some way, to the metaverse. Simply put, the more immersive the experiences, features and interactions, the more complex the related legal issues will be.

Contemplating the metaverse, these are just a few of the legal issues that come to mind:

  • Personal Data, Privacy and Cybersecurity: Privacy and data security lawyers are already challenged with addressing the global concerns presented by varying international approaches to privacy and growing threats to data security. If the metaverse fulfills the hype and develops into a 3D web-based hub for our day-to-day lives, the volume of data collected will be exponentially greater than the reams of data already collected, and the threats to that data will expand as well. Questions to consider will include:
    • Data and Privacy: What’s collected? How sensitive is it? Who owns or controls it? The sharing of data will be the cornerstone of a seamless, interoperable environment where users and their digital personas and assets will be usable and tradeable across the different arenas of the metaverse. How will the collection, sharing and use of such data be regulated? What laws will govern the collection of data across the metaverse? The laws of a particular state? Applicable federal privacy laws? The GDPR or other international regulations? Will there be a single overarching “privacy policy” governing the metaverse under a user and merchant agreement, or will there be varying policies depending on which realm of the metaverse you are in? Could some developers create a more “privacy-focused” experience, or would the personal data of avatars necessarily flow freely in every realm? How will children’s privacy be handled, and will there be “roped-off,” adults-only spaces that require further authentication to enter? Will the concepts that we talk about today – “personal information” or “personally identifiable information” – carry over to a world where the scope of available information expands exponentially as activities are tracked across the metaverse?
    • Cybersecurity: How will cybersecurity be managed in the metaverse? What requirements will apply with respect to keeping data secure? How will regulation or site policies evolve to address deep fakes, avatar impersonation, trolling, stolen biometric data, digital wallet hacks and all of the other cyberthreats that we already face today and are likely to be exacerbated in the metaverse? What laws will apply and how will the various players collaborate in addressing this issue?
  • Technology Infrastructure: The metaverse will be a robust, computing-intensive experience, highlighting the importance of strong contractual agreements concerning cloud computing, IoT, web hosting, and APIs, as well as software licenses, hardware agreements, and technology service agreements with the developers, providers and platform operators involved in the metaverse stack. Performance commitments and service levels will take on heightened importance in light of the real-time interactions that users will expect. What is a meaningful remedy for a service level failure when the metaverse (or a part of it) freezes? A credit or other traditional remedy? Lawyers and technologists will have to think creatively to find appropriate and practical approaches to this issue. And while SaaS and other “as a service” arrangements will grow in importance, perhaps the entire process will spawn MaaS, or “Metaverse as a Service.”
  • Open Source: Open source software, already ubiquitous, promises to play a huge role in metaverse development by allowing developers to improve on what has come before. Whether the obligations of common open source licenses will be triggered will depend on the technical details of implementation. It is also possible that new open source licenses will be created to contemplate development for the metaverse.
  • Quantum Computing: Quantum computing has dramatically increased the capabilities of computers and is likely to continue to do so over the coming years. It will certainly be one of the technologies deployed to provide the computing speed to allow the metaverse to function. However, with the awesome power of quantum computing come threats to certain legacy protections we use today. Passwords and traditional security protocols may become meaningless, requiring the development of post-quantum cryptography that is secure against both quantum and traditional computers. With raw, unchecked quantum computing power, the metaverse may be subject to manipulation and misuse. Regulation of quantum computing, as applied to the metaverse and elsewhere, may be needed.
  • Antitrust: Collaboration is key to the success of the metaverse, which is, by definition, a multi-tenant environment. Of course, collaboration among competitors may invoke antitrust concerns. Also, to the extent that larger technology companies are perceived as leveraging their position to assert unfair control in any virtual world, there may be additional concerns.
  • Intellectual Property Issues: A host of IP issues will certainly arise, including infringement, licensing (and breaches thereof), IP protection and anti-piracy efforts, patent issues, joint ownership concerns, safe harbors, potential formation of patent cross-licensing organizations (which also may invoke antitrust concerns), trademark and advertising issues, and the evaluation of new brand licensing opportunities. The scope of content and technology licenses will have to be delicately negotiated with forethought to the potential breadth of the metaverse (e.g., it is easy to limit a licensee’s rights by territory, but what about in a virtual world with no borders, or with borders that have not yet been drawn?). Rightsholders must also determine their tolerance for unauthorized digital goods or creations. One can envision a need for a DMCA-like safe harbor and takedown process for the metaverse. Also, akin to the litigation that sprouted from the use of athletes’ or celebrities’ likenesses (and their tattoos) in videogames, IP and right-of-publicity disputes are likely to multiply as people’s virtual avatars take on commercial value in ways that their real human selves never did.
  • Content Moderation: Section 230 of the Communications Decency Act (CDA) has been the target of bipartisan criticism for several years now, yet it remains in effect despite its application in some distasteful ways. How will the CDA be applied to the metaverse, where the exchange of third-party content is likely to be even more robust than what we see today on social media? How will “bad actors” be treated, and what does an account termination look like in the metaverse? Barring a change in the law, the same kinds of issues surrounding user-generated content that exist on today’s social media platforms will persist, and the same Section 230 defenses will be raised.
  • Blockchain, DAOs, Smart Contracts and Digital Assets: Since the metaverse is planned as a single forum with disparate operators and users, the use of a blockchain (or blockchains) would seem to be one solution to act as a trusted, immutable ledger of virtual goods, in-world currencies and identity authentication, particularly when interactions may be somewhat anonymous or between individuals who may or may not trust each other, and in the absence of a centralized clearinghouse or administrator for transactions. The use of smart contracts may be pervasive in the metaverse. Investors or developers may also decide that DAOs (decentralized autonomous organizations) can be useful to crowdsource and fund opportunities within that environment as well. Overall, a decentralized metaverse with its own discrete economy would feature the creation, sale and holding of sovereign digital assets (and their free use, display and exchange using blockchain-based payment networks within the metaverse). This would presumably give NFTs a role beyond mere digital collectibles and investment opportunities, as well as a role for other forms of digital currency (e.g., cryptocurrency, utility tokens, stablecoins, e-money, virtual “in game” money as found in some videogames, or a system of micropayments for virtual goods, services or experiences). How else will our avatars be able to build a new virtual wardrobe for what is to come?

With this shift to blockchain-based economic structures comes the potential regulatory issues behind digital currencies. How will securities laws view digital assets that retain and form value in the metaverse?  Also, as in life today, visitors to the metaverse must be wary of digital currency schemes and meme coin scams, with regulators not too far behind policing the fraudsters and unlawful actors that will seek opportunities in the metaverse. While regulators and lawmakers are struggling to keep up with the current crop of issues, and despite any progress they may make in that regard, many open issues will remain and new issues will be of concern as digital tokens and currency (and the contracts underlying them) take on new relevance in a virtual world.

Big ideas are always exciting. Watching the metaverse come together is no different, particularly as it all is happening alongside additional innovations surrounding the web, blockchain and cryptocurrency (and, more than likely, updated laws and regulations). However, it’s still early. And we’ll have to see if the current vision of the metaverse will translate into long-term, concrete commercial and civic-minded opportunities for businesses, service providers, developers and individual artists and creators.  Ultimately, these parties will need to sort through many legal issues, both novel and commonplace, before creating and participating in a new virtual world concept that goes beyond the massive multi-user videogame platforms and virtual worlds we have today.

Article by Jeffrey D. Neuburger of Proskauer Rose LLP, co-authored by Jonathan Mollod.


© 2021 Proskauer Rose LLP.

Meta Announces the End of Facial Recognition Technology on Facebook

The Facebook company, now known as Meta, announced this week that it is shutting down the Face Recognition system on Facebook. Meta stated that this is part of a company-wide move to limit the use of facial recognition technology in its products. What does this mean? If you have a Facebook page and previously opted in to be automatically recognized in photos and videos on Facebook, this feature will be disabled. Meta also announced that it is deleting more than a billion people’s individual facial recognition templates.

Meta claims in a press statement released this week that it needs to “weigh the positive use cases for facial recognition against growing societal concerns, especially as regulators have yet to provide clear rules.” Although Meta doesn’t elaborate on the details of those growing societal concerns, the company states that it seeks to move toward narrower forms of personal authentication.

Copyright © 2021 Robinson & Cole LLP. All rights reserved.


Legal Implications of Facebook Hearing for Whistleblowers & Employers – Privacy Issues on Many Levels

On Sunday, October 3rd, Facebook whistleblower Frances Haugen publicly revealed her identity on the CBS television show 60 Minutes. Formerly a member of Facebook’s civic misinformation team, she had previously reported the company to the Securities and Exchange Commission (SEC) for a variety of concerning business practices, including lying to investors and amplifying the January 6th Capitol Hill attack via Facebook’s platform.

Like all instances of whistleblowing, Ms. Haugen’s actions have a considerable array of legal implications, not only for Facebook but for the technology sector and for labor practices in general. Especially notable is the fact that Ms. Haugen reportedly signed a confidentiality agreement, sometimes called a non-disclosure agreement (NDA), with Facebook, which may complicate the legal process.

What are the Legal Implications of Breaking a Non-Disclosure Agreement?

After secretly copying thousands of internal documents and memos detailing these practices, Ms. Haugen left Facebook in May and testified before a Senate subcommittee on October 5th. Because she revealed information from the documents she took, Facebook could take legal action against Ms. Haugen by accusing her of stealing confidential information. Ms. Haugen’s actions raise questions about the enforceability of non-disclosure and confidentiality agreements when it comes to filing whistleblower complaints.

“Paradoxically, Big Tech’s attack on whistleblower-insiders is often aimed at the whistleblower’s disclosure of so-called confidential inside information of the company.  Yet, the very concerns expressed by the Facebook whistleblower and others inside Big Tech go to the heart of these same allegations—violations of privacy of the consuming public whose own personal data has been used in a way that puts a target on their backs,” said Renée Brooker, a partner with Tycko & Zavareei LLP, a law firm specializing in representing whistleblowers.

Since Ms. Haugen came forward, Facebook has stated that it will not retaliate against her for filing a whistleblower complaint. It is unclear whether such protections from legal action extend to other former employees in Ms. Haugen’s position.

“Other employees like Frances Haugen with information about corporate or governmental misconduct should know that they do not have to quit their jobs to be protected. There are over 100 federal laws that protect whistleblowers – each with its own focus on a particular industry or a particular whistleblower issue,” said Richard R. Renner of Kalijarvi, Chuzi, Newman & Fitch, PC, a long-time employment lawyer.

According to the Wall Street Journal, Ms. Haugen’s confidentiality agreement permits her to disclose information to regulators, but not to share proprietary information. A tricky balancing act to navigate.

“Big Tech’s attempt to silence whistleblowers are antithetical to the principles that underlie federal laws and federal whistleblower programs that seek to ferret out illegal activity,” Ms. Brooker said. “Those reporting laws include federal and state False Claims Acts, and the SEC Whistleblower Program, which typically feature whistleblower rewards and anti-retaliation provisions.”

Legal Implications for Facebook & Whistleblowers

Large tech organizations like Facebook have an overarching influence on digital information and how it is shared with the public. Whistleblowers like Ms. Haugen expose information about how companies accused of harmful practices act against their own consumers, but they also risk disclosing proprietary business information that may or may not be harmful to consumers.

Some of the most significant concerns Haugen expressed to Congress were only the tip of the iceberg, according to those familiar with whistleblowing reports on Big Tech. Aside from the burden of proof required for such disclosures to Congress, the threats of employer retaliation and legal repercussions may prevent internal concerns from coming to light.

“Facebook should not be singled out as a lone actor. Big Tech needs to be held accountable, and insiders can and should be encouraged to come forward and be prepared to back up their allegations with hard evidence sufficient to allow governments to conduct appropriate investigations,” Ms. Brooker said.

As concern for cybersecurity and data protection continues to hold the public’s interest, more whistleblower disclosures that could hold Big Tech and other companies accountable are coming to light.

Haugen’s testimony during the October 5, 2021 Congressional hearing revealed a possible expanding definition of media regulation versus consumer censorship. Although these allegations are only the latest against a company as large as Facebook, more whistleblowers may continue to come forward with similar accusations, bringing additional implications for privacy, employment law and whistleblower protections.

“The Facebook whistleblower’s revelations have opened the door just a crack on how Big Tech is exploiting American consumers,” Ms. Brooker said.

This article was written by Rachel Popa, Chandler Ford and Jessica Scheck of the National Law Review.

Supreme Court “Unfriends” Ninth Circuit Decision Applying TCPA to Facebook

In a unanimous decision, the Supreme Court held that the system Facebook uses to send “login notification” text messages (sent to users when an attempt is made to access their Facebook account from an unknown device or browser) did not constitute an “automatic telephone dialing system” within the meaning of the federal Telephone Consumer Protection Act (“TCPA”). In so holding, the Court narrowly construed the statute’s prohibition on automatic telephone dialing systems as applying only to devices that send calls and texts to randomly generated or sequential numbers. Facebook, Inc. v. Duguid, No. 19-511, slip op. (Apr. 1, 2021).

The TCPA aims to prevent abusive telemarketing practices by restricting communications made through “automatic telephone dialing systems.”  The statute defines autodialers as equipment with the capacity “to store or produce telephone numbers to be called, using a random or sequential number generator,” and to dial those numbers.  Plaintiff alleged Facebook violated the TCPA’s prohibition on autodialers by sending him login notification text messages using equipment that maintained a database of stored phone numbers. Plaintiff alleged Facebook’s system sent automated text messages to the stored numbers each time the associated account was accessed by an unrecognized device or browser.  Facebook moved to dismiss, arguing it did not use an autodialer as defined by the statute because it did not text numbers that were randomly or sequentially generated.  The Ninth Circuit was unpersuaded by Facebook’s reading of the statute, holding that an autodialer need only have the capacity to “store numbers to be called” and “to dial such numbers automatically” to fall within the ambit of the TCPA.

At the heart of the dispute was a question of statutory interpretation: whether the clause “using a random or sequential number generator” (in the phrase “store or produce telephone numbers to be called, using a random or sequential number generator”) modified both “store” and “produce,” or whether it applied only to the closest verb, “produce.”  Applying the series-qualifier canon of interpretation, which instructs that a modifier at the end of a series applies to the entire series, the Court decided the “random or sequential number generator” clause modified both “store” and “produce.”  The Court noted that applying this canon also reflects the most natural reading of the sentence: in a series of nouns or verbs, a modifier at the end of the list normally applies to the entire series.  The Court gave the example of the statement “students must not complete or check any homework to be turned in for a grade, using online homework-help websites.” The Court observed it would be “strange” to read that statement as prohibiting students from completing homework altogether, with or without online support, which would be the outcome if the final modifier did not apply to all the verbs in the series.

Moreover, the Court noted that the statutory context confirmed the autodialer prohibition was intended to apply only to equipment using a random or sequential number generator. Congress was motivated to enact the TCPA in order to prevent telemarketing robocalls from dialing emergency lines and tying up sequentially numbered lines at a single entity. Technology like Facebook’s simply did not pose that risk. The Court noted plaintiff’s interpretation of “autodialer” would “capture virtually all modern cell phones . . . . The TCPA’s liability provisions, then, could affect ordinary cell phone owners in the course of commonplace usage, such as speed dialing or sending automated text message responses.”

The Court thus held that a necessary feature of an autodialer under the TCPA is the capacity to use a random or sequential number generator to either store or produce phone numbers to be called.  This decision is expected to considerably decrease the number of class actions that have been brought under the statute.  Watch this space for further developments.

© 2021 Proskauer Rose LLP.


ARTICLE BY Lawrence I. Weinstein

Facebook’s Augmented Reality: Controlling Computer Functions with Your Mind

What if you could control a computer with your mind? Well, Facebook’s latest device may allow you to do just that. Facebook recently announced that it has created a wristband that allows you to move a digital object just by thinking about it. The wristband looks like a large iPod on a strap and uses sensors to detect the user’s movements through electromyography (EMG), which interprets the electrical activity of motor nerves as information travels from the brain to the hand. For example, you could navigate augmented-reality menus by thinking about moving your finger to scroll through the options. However, Facebook notes that this “control” comes from the part of the brain that governs motor information, not thought.

The wristband is still in the research-and-development phase at Facebook’s Reality Labs; no details about its cost or release date have been provided yet. The wristband is part of Facebook’s push for everyday virtual-reality and augmented-reality consumer products, and it is likely only the beginning.

Facebook also released information earlier this month about its augmented-reality glasses that, as you walk past your favorite coffee shop, might ask you if you want to place an order. Herein lies a privacy dilemma: products such as these glasses and the wristband mean that companies like Facebook will have access to even more data points about consumers than they already have. In the coffee shop example, the company and its advertising partners would know what kind of coffee you prefer; where you live, work, and frequently visit; and, either by submission or statistical deduction, your demographic, health, and other personal information. A personalized consumer profile based on your every move could easily be created (or, more likely, added to the already-existing profile of your buying behaviors).

Copyright © 2020 Robinson & Cole LLP. All rights reserved.



New U.K. Competition Unit to Focus on Facebook and Google, and Protecting News Publishers

You know your company has tremendous market power when an agency is created just to watch you.

That’s practically what has happened in the U.K., where the Competition and Markets Authority (CMA) has increased oversight of ad-driven digital platforms, namely Facebook and Google, by establishing a dedicated Digital Markets Unit (DMU). While the unit was created to enforce new laws governing any platform that dominates its respective market, Facebook and Google will get its full attention when it begins operating in April 2021.

The CMA says the intention of the unit is to “give consumers more choice and control over their data, help small businesses thrive, and ensure news outlets are not forced out by their bigger rivals.” While acknowledging the “huge benefits” these platforms offer businesses and society, helping people stay in touch and share creative content, and helping companies advertise their services, the CMA noted the growing concern that the concentration of market power among so few companies is hurting growth in the tech sector, reducing innovation and “potentially” having negative effects on their individual and business customers.

The CMA said a new code and the DMU will help ensure that the platforms are not forcing unfair terms on businesses, specifically mentioning “news publishers” and the goal of “helping enhance the sustainability of high-quality online journalism and news publishing.”

The unit will have the power to suspend, block and reverse the companies’ decisions, order them to comply with the law, and fine them.

The devil will be in the details of what the new code will require, and questions remain about what specific conduct the DMU will target and what actions it will take. Will it require the companies to pay license fees to publishers for presenting previews of their content? Will the unit reduce the user data the companies may access, something that would threaten their ad revenue? Will Facebook and Google have to share data with competitors? We will learn more when the code is drafted and when the DMU begins work in April.

Once again a European nation has taken the lead on the global stage to control the downsides of technologies and platforms that have transformed how people communicate and get their news, and how companies reach them to promote their products. With the U.S. deadlocked on so many policy matters, change in the U.S. appears most likely to come as the result of litigation, such as the Department of Justice’s suit against Google, the FTC’s anticipated suit against Facebook, and private antitrust actions brought by companies and individuals.

Edited by Tom Hagy for MoginRubin LLP.

© MoginRubin LLP

ARTICLE BY MoginRubin LLP

FTC Reports to Congress on Social Media Bots and Deceptive Advertising

The Federal Trade Commission recently sent a report to Congress on the use of social media bots in online advertising (the “Report”).  The Report summarizes the market for bots, discusses how the use of bots in online advertising might constitute a deceptive practice, and outlines the Commission’s past enforcement work and authority in this area, including cases involving automated programs on social media that mimic the activity of real people.

According to one oft-cited estimate, over 37% of all Internet traffic is not human and is instead the work of bots designed for either good or bad purposes.  Legitimate uses for bots vary: crawler bots collect data for search engine optimization or market analysis; monitoring bots analyze website and system health; aggregator bots gather information and news from different sources; and chatbots simulate human conversation to provide automated customer support.

Social media bots are simply bots that run on social media platforms, where they are common and have a wide variety of uses, just as with bots operating elsewhere.  Often shortened to “social bots,” they are generally described in terms of their ability to emulate and influence humans.

The Department of Homeland Security describes them as programs that “can be used on social media platforms to do various useful and malicious tasks while simulating human behavior.”  These programs use artificial intelligence and big data analytics to imitate legitimate activities.

According to the Report, “good” social media bots – which generally do not pretend to be real people – may provide notice of breaking news, alert people to local emergencies, or encourage civic engagement (such as volunteer opportunities).  Malicious ones, the Report states, may be used for harassment or hate speech, or to distribute malware.  In addition, bot creators may be hijacking legitimate accounts or using real people’s personal information.

The Report states that a recent experiment by the NATO Strategic Communications Centre of Excellence concluded that more than 90% of social media bots are used for commercial purposes, some of which may be benign – like chatbots that facilitate company-to-customer relations – while others are illicit, such as when influencers use them to boost their supposed popularity (which correlates with how much money they can command from advertisers) or when online publishers use them to increase the number of clicks an ad receives (which allows them to earn more commissions from advertisers).

Such misuses generate significant ad revenue.

“Bad” social media bots can also be used to distribute commercial spam containing promotional links and facilitate the spread of fake or deceptive online product reviews.

At present, it is cheap and easy to manipulate social media.  Bots have remained attractive for these reasons and because they are still hard for platforms to detect, are available at different levels of functionality and sophistication, and are financially rewarding to buyers and sellers.

Using social bots to generate likes, comments, or subscribers would generally contradict the terms of service of many social media platforms.  Major social media companies have made commitments to better protect their platforms and networks from manipulation, including the misuse of automated bots.  Those companies have since reported on their actions to remove or disable billions of inauthentic accounts.

The online advertising industry has also taken steps to curb bot and influencer fraud, given the substantial harm it causes to legitimate advertisers.

According to the Report, the computing community is designing sophisticated social bot detection methods.  Nonetheless, malicious use of social media bots remains a serious issue.

In terms of FTC action and authority involving social media bots, the Commission recently announced an enforcement action against a company that sold fake followers, subscribers, views and likes to people trying to artificially inflate their social media presence.

According to the FTC’s complaint, the corporate defendant operated websites on which people bought these fake indicators of influence for their social media accounts.  The corporate defendant allegedly filled over 58,000 orders for fake Twitter followers from buyers who included actors, athletes, motivational speakers, law firm partners and investment professionals.  The company allegedly sold over 4,000 bogus subscribers to operators of YouTube channels and over 32,000 fake views for people who posted individual videos – such as musicians trying to inflate their songs’ popularity.

The corporate defendant also allegedly sold over 800 orders of fake LinkedIn followers to marketing and public relations firms, financial services and investment companies, and others in the business world. The FTC’s complaint states that followers, subscribers and other indicators of social media influence “are important metrics that businesses and individuals use in making hiring, investing, purchasing, listening, and viewing decisions.” Put more simply, when considering whether to buy something or use a service, a consumer might look at a person’s or company’s social media presence.

According to the FTC, a bigger following might impact how the consumer views their legitimacy or the quality of that product or service.  As the complaint also explains, faking these metrics “could induce consumers to make less preferred choices” and “undermine the influencer economy and consumer trust in the information that influencers provide.”

The FTC further states that when a business uses social media bots to mislead the public in this way, it could also harm honest competitors.

The Commission alleged that the corporate defendant violated the FTC Act by providing its customers with the “means and instrumentalities” to commit deceptive acts or practices. That is, the company’s sale and distribution of fake indicators allowed those customers “to exaggerate and misrepresent their social media influence,” thereby enabling them to deceive potential clients, investors, partners, employees, viewers, and music buyers, among others. The corporate defendant was therefore charged with violating the FTC Act even though it did not itself make misrepresentations directly to consumers.

The settlement banned the corporate defendant and its owner from selling or assisting others in selling social media influence. It also prohibits them from misrepresenting, or assisting others to misrepresent, the social media influence of any person or entity, or in any review or endorsement. The order imposes a $2.5 million judgment against the owner – the amount he was allegedly paid by the corporate defendant or its parent company.

The aforementioned case is not the first time the FTC has taken action against the commercial misuse of bots or inauthentic online accounts.  Indeed, such actions, while previously involving matters outside the social media context, have been taking place for more than a decade.

For example, the Commission has brought three cases – against Match.com, Ashley Madison, and JDI Dating – involving the use of bots or fake profiles on dating websites.  In all three cases, the FTC alleged in part that the companies or third parties were misrepresenting that communications were from real people when in fact they came from fake profiles.

Further, in 2009, the FTC took action against an alleged rogue Internet service provider that hosted malicious botnets.

All of this enforcement activity demonstrates the ability of the FTC Act to adapt to changing business and consumer behavior as well as to new forms of advertising.

Although technology and business models continue to change, the principles underlying FTC enforcement priorities and cases remain constant.  One such principle lies in the agency’s deception authority.

Under the FTC Act, a claim is deceptive if it is likely to mislead consumers acting reasonably in the circumstances, to their detriment.  A practice is unfair if it causes or is likely to cause substantial consumer injury that consumers cannot reasonably avoid and which is not outweighed by benefits to consumers or competition.

The Commission’s legal authority to counteract the spread of “bad” social media bots is thus powered but also constrained by the FTC Act, pursuant to which the FTC would need to show in any given case that the use of such bots constitutes a deceptive or unfair practice in or affecting commerce.

The FTC will continue its monitoring of enforcement opportunities in matters involving advertising on social media as well as the commercial activity of bots on those platforms.

Commissioner Rohit Chopra issued a statement regarding the “viral dissemination of disinformation on social media platforms” and the “serious harms posed to society.” “Social media platforms have become a vehicle to sow social divisions within our country through sophisticated disinformation campaigns. Much of this spread of intentionally false information relies on bots and fake accounts,” Chopra states.

Commissioner Chopra states that “bots and fake accounts contribute to increased engagement by users, and they can also inflate metrics that influence how advertisers spend across various channels.”  “[T]he ad-driven business model on which most platforms rely is based on building detailed dossiers of users.  Platforms may claim that it is difficult to detect bots, but they simultaneously sell advertisers on their ability to precisely target advertising based on extensive data on the lives, behaviors, and tastes of their users … Bots can also benefit platforms by inflating the price of digital advertising.   The price that platforms command for ads is tied closely to user engagement, often measured by the number of impressions.”



© 2020 Hinch Newman LLP