The post Privacy Tip #359 – GoodRx Settles with FTC for Sharing Health Information for Advertising appeared first on The National Law Forum.
According to the press release, the FTC alleged that GoodRx failed “to notify consumers and others of its unauthorized disclosures of consumers’ personal health information to Facebook, Google, and other companies.”
In the proposed federal court order (the Order), GoodRx will be “prohibited from sharing user health data with applicable third parties for advertising purposes.” The complaint alleged that GoodRx told consumers that it would not share personal health information, yet it monetized users’ personal health information by sharing it with third parties such as Facebook and Instagram to target users with personalized health- and medication-specific ads.
The complaint also alleged that GoodRx “compiled lists of its users who had purchased particular medications such as those used to treat heart disease and blood pressure, and uploaded their email addresses, phone numbers, and mobile advertising IDs to Facebook so it could identify their profiles. GoodRx then used that information to target these users with health-related advertisements.” It also alleges that those third parties then used the information received from GoodRx for their own internal purposes to improve the effectiveness of the advertising.
The proposed Order must be approved by a federal court before it can take effect. To address the FTC’s allegations, the Order prohibits the sharing of health data for ads; requires user consent for any other sharing; stipulates that the company must direct third parties to delete consumer health data; limits the retention of data; and requires implementation of a mandated privacy program. Click here to read the press release.
Article By Linn F. Freedman of Robinson & Cole LLP
For more privacy and cybersecurity legal news, click here to visit the National Law Review.
The post The Metaverse: A Legal Primer for the Hospitality Industry appeared first on The National Law Forum.
The metaverse, regarded by many as the next frontier in digital commerce, does not, on its surface, appear to offer many benefits to an industry with a core mission of providing a physical space for guests to use and occupy. However, there are many opportunities that the metaverse may offer to owners, operators, licensors, managers, and other participants in the hospitality industry that should not be ignored.
The metaverse is a term used to describe a digital space that allows social interactions, frequently through use of a digital avatar by the user. Built largely using decentralized, blockchain technology instead of centralized servers, the metaverse consists of immersive, three-dimensional experiences, persistent and traceable digital assets, and a strong social component. The metaverse is still in its infancy, so many of the uses for the metaverse remain aspirational; however, metaverse platforms have already seen a great deal of activity and commerce. Meanwhile, technology companies are working to produce the next-generation consumer electronics that they hope will make the metaverse a more common location for commerce.
The hospitality industry may find the metaverse useful in enhancing marketing and guest experiences.
Immersive virtual tours of hotel properties and the surrounding area may allow potential customers to explore all aspects of the property and its surroundings before booking. Operators may also add additional booking options or promotions within the virtual tour to increase exposure to customers.
Creating hybrid, in-person and remote events, such as conferences, weddings, or other celebrations, is also possible through the metaverse. This would allow guests on-site to interact with those who are not physically present at the property for an integrated experience and possible additional revenue streams.
Significantly, numerous outlets have identified the metaverse as one of the top emerging trends in technology. As its popularity grows, the metaverse will become an important location for the hospitality industry to interact with and market to its customer base.
As we move into the future, the metaverse appears poised to provide a tremendous opportunity for the hospitality industry to connect directly with consumers in an interactive way that was until recently considered science fiction. But like every new frontier, technological or otherwise, there are legal and regulatory hurdles to consider and overcome.
Article By Charles B. Ferguson, Jr. and Kimberly A. Wachen of ArentFox Schiff LLP
For more technology legal news, click here to visit the National Law Review.
The post Texas AG Sues Meta Over Collection and Use of Biometric Data appeared first on The National Law Forum.
On February 14, 2022, Texas Attorney General Ken Paxton brought suit against Meta, the parent company of Facebook and Instagram, over the company’s collection and use of biometric data. The suit alleges that Meta collected and used Texans’ facial geometry data in violation of the Texas Capture or Use of Biometric Identifier Act (“CUBI”) and the Texas Deceptive Trade Practices Act (“DTPA”). The lawsuit is significant because it represents the first time the Texas Attorney General’s Office has brought suit under CUBI.
The suit focuses on Meta’s “tag suggestions” feature, which the company has since retired. The feature scanned faces in users’ photos and videos to suggest “tagging” (i.e., identifying by name) users who appeared in the photos and videos. In the complaint, Attorney General Ken Paxton alleged that Meta collected and analyzed individuals’ facial geometry data (which constitutes biometric data under CUBI) without their consent, shared the data with third parties, and failed to destroy the data in a timely manner, all in violation of CUBI and the DTPA. CUBI regulates the collection and use of biometric data for commercial purposes, and the DTPA prohibits false, misleading, or deceptive acts or practices in the conduct of any trade or commerce.
Among other forms of relief, the complaint seeks an injunction enjoining Meta from violating these laws, a $25,000 civil penalty for each violation of CUBI, and a $10,000 civil penalty for each violation of the DTPA. The suit follows Facebook’s $650 million class-action settlement over alleged violations of Illinois’ Biometric Information Privacy Act and the company’s discontinuance of the tag suggestions feature last year.
Article By Hunton Andrews Kurth’s Privacy and Cybersecurity Practice Group
For more privacy and cybersecurity legal news, click here to visit the National Law Review.
The post In the Coming ‘Metaverse’, There May Be Excitement but There Certainly Will Be Legal Issues appeared first on The National Law Forum.
The concept of the “metaverse” has garnered much press coverage of late, addressing such topics as the new appetite for metaverse investment opportunities, a recent virtual land boom, or just the promise of it all, where “crypto, gaming and capitalism collide.” The term “metaverse,” which comes from Neal Stephenson’s 1992 science fiction novel “Snow Crash,” is generally used to refer to the development of virtual reality (VR) and augmented reality (AR) technologies, featuring a mashup of massive multiplayer gaming, virtual worlds, virtual workspaces, and remote education to create a decentralized wonderland and collaborative space. The grand concept is that the metaverse will be the next iteration of the mobile internet and a major part of both digital and real life.
Don’t feel like going out tonight in the real world? Why not stay “in” and catch a show or meet people/avatars/smart bots in the metaverse?
As currently conceived, the metaverse, “Web 3.0,” would feature a synchronous environment giving users a seamless experience across different realms, even if such discrete areas of the virtual world are operated by different developers. It would boast its own economy where users and their avatars interact socially and use digital assets based in both virtual and actual reality, a place where commerce would presumably be heavily based in decentralized finance, DeFi. No single company or platform would operate the metaverse, but rather, it would be administered by many entities in a decentralized manner (presumably on some open source metaverse OS) and work across multiple computing platforms. At the outset, the metaverse would look like a virtual world featuring enhanced experiences interfaced via VR headsets, mobile devices, gaming consoles and haptic gear that makes you “feel” virtual things. Later, the contours of the metaverse would be shaped by user preferences, monetary opportunities and incremental innovations by developers building on what came before.
In short, the vision is that multiple companies, developers and creators will come together to create one metaverse (as opposed to proprietary, closed platforms) and have it evolve into an embodied mobile internet, one that is open and interoperable and would include many facets of life (i.e., work, social interactions, entertainment) in one hybrid space.
In order for the metaverse to become a reality – that is, successfully link current gaming and communications platforms with other new technologies into a massive new online destination – many obstacles will have to be overcome, even beyond the hardware, software and integration issues. The legal issues stand out, front and center. Indeed, the concept of the metaverse presents a law school final exam’s worth of legal questions to sort out. Meanwhile, we are still trying to resolve the myriad of legal issues presented by “Web 2.0,” the Internet as we know it today. Adding the metaverse to the picture will certainly make things even more complicated.
At the heart of it is the question of what legal underpinnings we need for the metaverse infrastructure – an infrastructure that will allow disparate developers and studios, e-commerce marketplaces, platforms and service providers to all coexist within one virtual world. To make it even more interesting, it is envisioned to be an interoperable, seamless experience for shoppers, gamers, social media users or just curious internet-goers armed with wallets full of crypto to spend and virtual assets to flaunt. Currently, we have some well-established web platforms that are closed digital communities and some emerging ones that are open, each with varying business models that will have to be adapted, in some way, to the metaverse. Simply put, the greater the immersive experience and features and interactions, the more complex the related legal issues will be.
Contemplating the metaverse, these are just a few of the legal issues that come to mind:
With this shift to blockchain-based economic structures comes the potential regulatory issues behind digital currencies. How will securities laws view digital assets that retain and form value in the metaverse? Also, as in life today, visitors to the metaverse must be wary of digital currency schemes and meme coin scams, with regulators not too far behind policing the fraudsters and unlawful actors that will seek opportunities in the metaverse. While regulators and lawmakers are struggling to keep up with the current crop of issues, and despite any progress they may make in that regard, many open issues will remain and new issues will be of concern as digital tokens and currency (and the contracts underlying them) take on new relevance in a virtual world.
Big ideas are always exciting. Watching the metaverse come together is no different, particularly as it all is happening alongside additional innovations surrounding the web, blockchain and cryptocurrency (and, more than likely, updated laws and regulations). However, it’s still early. And we’ll have to see if the current vision of the metaverse will translate into long-term, concrete commercial and civic-minded opportunities for businesses, service providers, developers and individual artists and creators. Ultimately, these parties will need to sort through many legal issues, both novel and commonplace, before creating and participating in a new virtual world concept that goes beyond the massive multi-user videogame platforms and virtual worlds we have today.
Article By Jeffrey D. Neuburger of Proskauer Rose LLP. Co-authored by Jonathan Mollod.
For more legal news regarding data privacy and cybersecurity, click here to visit the National Law Review.
The post Meta Announces the End of Facial Recognition Technology on Facebook appeared first on The National Law Forum.
The Facebook company now known as Meta announced this week that it is shutting down the Face Recognition system on Facebook. Meta stated that this is part of a company-wide move to limit the use of facial recognition technology in its products. What does this mean? If you have a Facebook page and you previously opted-in to be automatically recognized in photos and videos on Facebook, this feature will be disabled. Meta also announced that it is deleting more than a billion people’s individual facial recognition templates.
Meta claims in a press statement released this week that it needs to “weigh the positive use cases for facial recognition against growing societal concerns, especially as regulators have yet to provide clear rules.” Although Meta does not elaborate on the details of these growing societal concerns, the company states that it seeks to move toward narrower forms of personal authentication.
Article By Deborah George of Robinson & Cole LLP
For more articles on facial recognition, visit the NLR Communications, Media & Internet section.
The post Legal Implications of Facebook Hearing for Whistleblowers & Employers – Privacy Issues on Many Levels appeared first on The National Law Forum.
Like all instances of whistleblowing, Ms. Haugen’s actions have a considerable array of legal implications – not only for Facebook, but for the technology sector and for labor practices in general. Especially notable is the fact that Ms. Haugen reportedly signed a confidentiality agreement, sometimes called a non-disclosure agreement (NDA), with Facebook, which may complicate the legal process.
After secretly copying thousands of internal documents and memos detailing these practices, Ms. Haugen left Facebook in May, and testified before a Senate subcommittee on October 5th. Because she revealed information from the documents she took, Facebook could take legal action against Ms. Haugen by accusing her of stealing confidential information. Ms. Haugen’s actions raise questions about the enforceability of non-disclosure and confidentiality agreements when it comes to filing whistleblower complaints.
“Paradoxically, Big Tech’s attack on whistleblower-insiders is often aimed at the whistleblower’s disclosure of so-called confidential inside information of the company. Yet, the very concerns expressed by the Facebook whistleblower and others inside Big Tech go to the heart of these same allegations—violations of privacy of the consuming public whose own personal data has been used in a way that puts a target on their backs,” said Renée Brooker, a partner with Tycko & Zavareei LLP, a law firm specializing in representing whistleblowers.
Since Ms. Haugen came forward, Facebook has stated that it will not retaliate against her for filing a whistleblower complaint. It is unclear whether similar protection from legal action would extend to other former employees in Ms. Haugen’s position.
“Other employees like Frances Haugen with information about corporate or governmental misconduct should know that they do not have to quit their jobs to be protected. There are over 100 federal laws that protect whistleblowers – each with its own focus on a particular industry, or a particular whistleblower issue,” said Richard R. Renner of Kalijarvi, Chuzi, Newman & Fitch, PC, a long-time employment lawyer.
According to the Wall Street Journal, Ms. Haugen’s confidentiality agreement permits her to disclose information to regulators, but not to share proprietary information. A tricky balancing act to navigate.
“Big Tech’s attempt to silence whistleblowers are antithetical to the principles that underlie federal laws and federal whistleblower programs that seek to ferret out illegal activity,” Ms. Brooker said. “Those reporting laws include federal and state False Claims Acts, and the SEC Whistleblower Program, which typically feature whistleblower rewards and anti-retaliation provisions.”
Large tech organizations like Facebook have an overarching influence on digital information and how it is shared with the public. Whistleblowers like Ms. Haugen expose potential information about how companies accused of harmful practices act against their own consumers, but also risk disclosing proprietary business information which may or may not be harmful to consumers.
Some of the most significant concerns Haugen expressed to Congress were only the tip of the iceberg, according to those familiar with whistleblowing reports on Big Tech. Aside from the burden of proof required for such releases to Congress, the threats of employer retaliation and legal repercussions may prevent internal concerns from coming to light.
“Facebook should not be singled out as a lone actor. Big Tech needs to be held accountable and insiders can and should be encouraged to come forward and be prepared to back up their allegations with hard evidence sufficient to allow governments to conduct appropriate investigations,” Ms. Brooker said.
As cybersecurity and data protection continue to hold the public’s interest, more whistleblower disclosures that could hold Big Tech and other companies accountable are coming to light.
Haugen’s testimony during the October 5, 2021 Congressional hearing revealed a possible expanding definition of media regulation versus consumer censorship. Although these allegations are only the latest against a company as large as Facebook, more whistleblowers may continue to come forward with similar accusations, bringing additional implications for privacy, employment law, and whistleblower protections.
“The Facebook whistleblower’s revelations have opened the door just a crack on how Big Tech is exploiting American consumers,” Ms. Brooker said.
This article was written by Rachel Popa, Chandler Ford and Jessica Scheck of the National Law Review. To read more articles about privacy, please visit our cybersecurity section.
The post Supreme Court “Unfriends” Ninth Circuit Decision Applying TCPA to Facebook appeared first on The National Law Forum.
The TCPA aims to prevent abusive telemarketing practices by restricting communications made through “automatic telephone dialing systems.” The statute defines autodialers as equipment with the capacity “to store or produce telephone numbers to be called, using a random or sequential number generator,” and to dial those numbers. Plaintiff alleged Facebook violated the TCPA’s prohibition on autodialers by sending him login notification text messages using equipment that maintained a database of stored phone numbers. Plaintiff alleged Facebook’s system sent automated text messages to the stored numbers each time the associated account was accessed by an unrecognized device or browser. Facebook moved to dismiss, arguing it did not use an autodialer as defined by the statute because it did not text numbers that were randomly or sequentially generated. The Ninth Circuit was unpersuaded by Facebook’s reading of the statute, holding that an autodialer need only have the capacity to “store numbers to be called” and “to dial such numbers automatically” to fall within the ambit of the TCPA.
At the heart of the dispute was a question of statutory interpretation: whether the clause “using a random or sequential number generator” (in the phrase “store or produce telephone numbers to be called, using a random or sequential number generator”) modified both “store” and “produce,” or whether it applied only to the closest verb, “produce.” Applying the series-qualifier canon of interpretation, which instructs that a modifier at the end of a series applies to the entire series, the Court decided the “random or sequential number generator” clause modified both “store” and “produce.” The Court noted that applying this canon also reflects the most natural reading of the sentence: in a series of nouns or verbs, a modifier at the end of the list normally applies to the entire series. The Court gave the example of the statement “students must not complete or check any homework to be turned in for a grade, using online homework-help websites.” The Court observed it would be “strange” to read that statement as prohibiting students from completing homework altogether, with or without online support, which would be the outcome if the final modifier did not apply to all the verbs in the series.
Moreover, the Court noted that the statutory context confirmed the autodialer prohibition was intended to apply only to equipment using a random or sequential number generator. Congress was motivated to enact the TCPA in order to prevent telemarketing robocalls from dialing emergency lines and tying up sequentially numbered lines at a single entity. Technology like Facebook’s simply did not pose that risk. The Court noted plaintiff’s interpretation of “autodialer” would “capture virtually all modern cell phones . . . . The TCPA’s liability provisions, then, could affect ordinary cell phone owners in the course of commonplace usage, such as speed dialing or sending automated text message responses.”
The Court thus held that a necessary feature of an autodialer under the TCPA is the capacity to use a random or sequential number generator to either store or produce phone numbers to be called. This decision is expected to considerably decrease the number of class actions that have been brought under the statute. Watch this space for further developments.
© 2020 Proskauer Rose LLP.
The post Facebook’s Augmented-Reality: Controlling Computer Functions with Your Mind appeared first on The National Law Forum.
The wristband is still in the research-and-development phase at Facebook’s Reality Labs; no details about its cost or release date have been provided yet. This wristband is part of Facebook’s push for every-day virtual reality and augmented-reality products for consumers, and it’s likely only the beginning.
Facebook also released information earlier this month about its augmented-reality glasses that, as you walk past your favorite coffee shop, might ask you if you want to place an order. Herein lies a privacy dilemma: products such as these glasses and wristband mean that companies like Facebook will have access to even more data points about consumers than they already do. In the coffee shop example, the company and its advertising partners would know what kind of coffee you prefer; where you live, work, and frequently visit; and, either by submission or statistical deduction, your demographic, health, and other personal information. A personalized consumer profile based on your every move could easily be created (or, more likely, added to the already-existing profile about your buying behaviors).
Copyright © 2020 Robinson & Cole LLP. All rights reserved.
The post New U.K. Competition Unit to Focus on Facebook and Google, and Protecting News Publishers appeared first on The National Law Forum.
That’s practically what has happened in the U.K. where the Competition and Markets Authority (CMA) has increased oversight of ad-driven digital platforms, namely Facebook and Google, by establishing a dedicated Digital Markets Unit (DMU). While it was created to enforce new laws to govern any platform that dominates their respective market, when the new unit starts operating in April 2021 Facebook and Google will get its full attention.
The CMA says the intention of the unit is to “give consumers more choice and control over their data, help small businesses thrive, and ensure news outlets are not forced out by their bigger rivals.” While acknowledging the “huge benefits” these platforms offer businesses and society, helping people stay in touch and share creative content, and helping companies advertise their services, the CMA noted the growing concern that the concentration of market power among so few companies is hurting growth in the tech sector, reducing innovation and “potentially” having negative effects on their individual and business customers.
The CMA said a new code and the DMU will help ensure that the platforms are not forcing unfair terms on businesses, specifically mentioning “news publishers” and the goal of “helping enhance the sustainability of high-quality online journalism and news publishing.”
The unit will have the power to suspend, block and reverse the companies’ decisions, order them to comply with the law, and fine them.
The devil will be in the details of what the new code will require, and questions remain about what specific conduct the DMU will target and what actions it will take. Will it require the companies to pay license fees to publishers for presenting previews of their content? Will the unit reduce the user data the companies may access, something that would threaten their ad revenue? Will Facebook and Google have to share data with competitors? We will learn more when the code is drafted and when the DMU begins work in April.
Once again a European nation has taken the lead on the global stage to control the downsides of technologies and platforms that have transformed how people communicate and get their news, and how companies reach them to promote their products. With the U.S. deadlocked on so many policy matters, change in the U.S. appears most likely to come as the result of litigation, such as the Department of Justice’s suit against Google, the FTC’s anticipated suit against Facebook, and private antitrust actions brought by companies and individuals.
Edited by Tom Hagy for MoginRubin LLP.
The post FTC Reports to Congress on Social Media Bots and Deceptive Advertising appeared first on The National Law Forum.
The Federal Trade Commission recently sent a report to Congress on the use of social media bots in online advertising (the “Report”). The Report summarizes the market for bots, discusses how the use of bots in online advertising might constitute a deceptive practice, and outlines the Commission’s past enforcement work and authority in this area, including cases involving automated programs on social media that mimic the activity of real people.
According to one oft-cited estimate, over 37% of all Internet traffic is not human and is instead the work of bots designed for either good or bad purposes. Legitimate uses for bots vary: crawler bots collect data for search engine optimization or market analysis; monitoring bots analyze website and system health; aggregator bots gather information and news from different sources; and chatbots simulate human conversation to provide automated customer support.
Social media bots are simply bots that run on social media platforms, where they are common and have a wide variety of uses, just as with bots operating elsewhere. Often shortened to “social bots,” they are generally described in terms of their ability to emulate and influence humans.
The Department of Homeland Security describes them as programs that “can be used on social media platforms to do various useful and malicious tasks while simulating human behavior.” These programs use artificial intelligence and big data analytics to imitate legitimate activities.
According to the Report, “good” social media bots – which generally do not pretend to be real people – may provide notice of breaking news, alert people to local emergencies, or encourage civic engagement (such as volunteer opportunities). Malicious ones, the Report states, may be used for harassment or hate speech, or to distribute malware. In addition, bot creators may be hijacking legitimate accounts or using real people’s personal information.
The Report states that a recent experiment by the NATO Strategic Communications Centre of Excellence concluded that more than 90% of social media bots are used for commercial purposes, some of which may be benign – like chatbots that facilitate company-to-customer relations – while others are illicit, such as when influencers use them to boost their supposed popularity (which correlates with how much money they can command from advertisers) or when online publishers use them to increase the number of clicks an ad receives (which allows them to earn more commissions from advertisers).
Such misuses generate significant ad revenue.
“Bad” social media bots can also be used to distribute commercial spam containing promotional links and facilitate the spread of fake or deceptive online product reviews.
At present, it is cheap and easy to manipulate social media. Bots have remained attractive for these reasons and because they are still hard for platforms to detect, are available at different levels of functionality and sophistication, and are financially rewarding to buyers and sellers.
Using social bots to generate likes, comments, or subscribers would generally contradict the terms of service of many social media platforms. Major social media companies have made commitments to better protect their platforms and networks from manipulation, including the misuse of automated bots. Those companies have since reported on their actions to remove or disable billions of inauthentic accounts.
The online advertising industry has also taken steps to curb bot and influencer fraud, given the substantial harm it causes to legitimate advertisers.
According to the Report, the computing community is designing sophisticated social bot detection methods. Nonetheless, malicious use of social media bots remains a serious issue.
In terms of FTC action and authority involving social media bots, the FTC recently announced an enforcement action against a company that sold fake followers, subscribers, views and likes to people trying to artificially inflate their social media presence.
According to the FTC’s complaint, the corporate defendant operated websites on which people bought these fake indicators of influence for their social media accounts. The corporate defendant allegedly filled over 58,000 orders for fake Twitter followers from buyers who included actors, athletes, motivational speakers, law firm partners and investment professionals. The company allegedly sold over 4,000 bogus subscribers to operators of YouTube channels and over 32,000 fake views for people who posted individual videos – such as musicians trying to inflate their songs’ popularity.
The corporate defendant also allegedly sold over 800 orders of fake LinkedIn followers to marketing and public relations firms, financial services and investment companies, and others in the business world. The FTC’s complaint states that followers, subscribers and other indicators of social media influence “are important metrics that businesses and individuals use in making hiring, investing, purchasing, listening, and viewing decisions.” Put more simply, when considering whether to buy something or use a service, a consumer might look at a person’s or company’s social media.
According to the FTC, a bigger following might affect how the consumer perceives the legitimacy or quality of that person’s or company’s product or service. As the complaint also explains, faking these metrics “could induce consumers to make less preferred choices” and “undermine the influencer economy and consumer trust in the information that influencers provide.”
The FTC further states that when a business uses social media bots to mislead the public in this way, it could also harm honest competitors.
The Commission alleged that the corporate defendant violated the FTC Act by providing its customers with the “means and instrumentalities” to commit deceptive acts or practices. That is, the company’s sale and distribution of fake indicators allowed those customers “to exaggerate and misrepresent their social media influence,” thereby enabling them to deceive potential clients, investors, partners, employees, viewers, and music buyers, among others. The corporate defendant was therefore charged with violating the FTC Act even though it did not itself make misrepresentations directly to consumers.
The settlement banned the corporate defendant and its owner from selling or assisting others in selling social media influence. It also prohibits them from misrepresenting, or assisting others to misrepresent, the social media influence of any person or entity or in any review or endorsement. The order imposes a $2.5 million judgment against the owner – the amount he was allegedly paid by the corporate defendant or its parent company.
The aforementioned case is not the first time the FTC has taken action against the commercial misuse of bots or inauthentic online accounts. Indeed, such actions, while previously involving matters outside the social media context, have been taking place for more than a decade.
For example, the Commission has brought three cases – against Match.com, Ashley Madison, and JDI Dating – involving the use of bots or fake profiles on dating websites. In all three cases, the FTC alleged in part that the companies or third parties were misrepresenting that communications were from real people when in fact they came from fake profiles.
Further, in 2009, the FTC took action against an alleged rogue Internet service provider that hosted malicious botnets.
All of this enforcement activity demonstrates the ability of the FTC Act to adapt to changing business and consumer behavior as well as to new forms of advertising.
Although technology and business models continue to change, the principles underlying FTC enforcement priorities and cases remain constant. One such principle lies in the agency’s deception authority.
Under the FTC Act, a claim is deceptive if it is likely to mislead consumers acting reasonably in the circumstances, to their detriment. A practice is unfair if it causes or is likely to cause substantial consumer injury that consumers cannot reasonably avoid and which is not outweighed by benefits to consumers or competition.
The Commission’s legal authority to counteract the spread of “bad” social media bots is thus powered but also constrained by the FTC Act, pursuant to which the FTC would need to show in any given case that the use of such bots constitutes a deceptive or unfair practice in or affecting commerce.
The FTC will continue its monitoring of enforcement opportunities in matters involving advertising on social media as well as the commercial activity of bots on those platforms.
Commissioner Rohit Chopra issued a statement regarding the “viral dissemination of disinformation on social media platforms” and the “serious harms posed to society.” “Social media platforms have become a vehicle to sow social divisions within our country through sophisticated disinformation campaigns. Much of this spread of intentionally false information relies on bots and fake accounts,” Chopra states.
Commissioner Chopra states that “bots and fake accounts contribute to increased engagement by users, and they can also inflate metrics that influence how advertisers spend across various channels.” “[T]he ad-driven business model on which most platforms rely is based on building detailed dossiers of users. Platforms may claim that it is difficult to detect bots, but they simultaneously sell advertisers on their ability to precisely target advertising based on extensive data on the lives, behaviors, and tastes of their users … Bots can also benefit platforms by inflating the price of digital advertising. The price that platforms command for ads is tied closely to user engagement, often measured by the number of impressions.”
Click here to read the Report.
© 2020 Hinch Newman LLP
The post FTC Reports to Congress on Social Media Bots and Deceptive Advertising appeared first on The National Law Forum.