The Federal Trade Commission recently sent a report to Congress on the use of social media bots in online advertising (the “Report”). The Report summarizes the market for bots, discusses how the use of bots in online advertising might constitute a deceptive practice, and outlines the Commission’s past enforcement work and authority in this area, including cases involving automated programs on social media that mimic the activity of real people.
According to one oft-cited estimate, over 37% of all Internet traffic is not human and is instead the work of bots designed for either good or bad purposes. Legitimate uses for bots vary: crawler bots collect data for search engine optimization or market analysis; monitoring bots analyze website and system health; aggregator bots gather information and news from different sources; and chatbots simulate human conversation to provide automated customer support.
Social media bots are simply bots that run on social media platforms, where they are common and have a wide variety of uses, just as with bots operating elsewhere. Often shortened to “social bots,” they are generally described in terms of their ability to emulate and influence humans.
The Department of Homeland Security describes them as programs that “can be used on social media platforms to do various useful and malicious tasks while simulating human behavior.” These programs use artificial intelligence and big data analytics to imitate legitimate activities.
According to the Report, “good” social media bots – which generally do not pretend to be real people – may provide notice of breaking news, alert people to local emergencies, or encourage civic engagement (such as volunteer opportunities). Malicious ones, the Report states, may be used for harassment or hate speech, or to distribute malware. In addition, bot creators may be hijacking legitimate accounts or using real people’s personal information.
The Report states that a recent experiment by the NATO Strategic Communications Centre of Excellence concluded that more than 90% of social media bots are used for commercial purposes. Some of those uses may be benign – like chatbots that facilitate company-to-customer relations – while others are illicit, as when influencers use bots to boost their supposed popularity (which correlates with how much money they can command from advertisers) or when online publishers use them to increase the number of clicks an ad receives (which allows them to earn more commissions from advertisers).
Such misuses generate significant ad revenue.
“Bad” social media bots can also be used to distribute commercial spam containing promotional links and facilitate the spread of fake or deceptive online product reviews.
At present, it is cheap and easy to manipulate social media. Bots have remained attractive for these reasons and because they are still hard for platforms to detect, are available at different levels of functionality and sophistication, and are financially rewarding to buyers and sellers.
Using social bots to generate likes, comments, or subscribers generally violates the terms of service of many social media platforms. Major social media companies have committed to better protecting their platforms and networks from manipulation, including the misuse of automated bots. Those companies have since reported on their actions to remove or disable billions of inauthentic accounts.
The online advertising industry has also taken steps to curb bot and influencer fraud, given the substantial harm it causes to legitimate advertisers.
According to the Report, the computing community is designing sophisticated social bot detection methods. Nonetheless, malicious use of social media bots remains a serious issue.
In terms of FTC action and authority involving social media bots, the FTC recently announced an enforcement action against a company that sold fake followers, subscribers, views and likes to people trying to artificially inflate their social media presence.
According to the FTC’s complaint, the corporate defendant operated websites on which people bought these fake indicators of influence for their social media accounts. The corporate defendant allegedly filled over 58,000 orders for fake Twitter followers from buyers who included actors, athletes, motivational speakers, law firm partners and investment professionals. The company allegedly sold over 4,000 bogus subscribers to operators of YouTube channels and over 32,000 fake views for people who posted individual videos – such as musicians trying to inflate their songs’ popularity.
The corporate defendant also allegedly sold over 800 orders of fake LinkedIn followers to marketing and public relations firms, financial services and investment companies, and others in the business world. The FTC’s complaint states that followers, subscribers and other indicators of social media influence “are important metrics that businesses and individuals use in making hiring, investing, purchasing, listening, and viewing decisions.” Put more simply, when considering whether to buy something or use a service, a consumer might look at a person’s or company’s social media.
According to the FTC, a bigger following might affect how the consumer views that person’s or company’s legitimacy or the quality of its products or services. As the complaint also explains, faking these metrics “could induce consumers to make less preferred choices” and “undermine the influencer economy and consumer trust in the information that influencers provide.”
The FTC further states that when a business uses social media bots to mislead the public in this way, it could also harm honest competitors.
The Commission alleged that the corporate defendant violated the FTC Act by providing its customers with the “means and instrumentalities” to commit deceptive acts or practices. That is, the company’s sale and distribution of fake indicators allowed those customers “to exaggerate and misrepresent their social media influence,” thereby enabling them to deceive potential clients, investors, partners, employees, viewers, and music buyers, among others. The corporate defendant was therefore charged with violating the FTC Act even though it did not itself make misrepresentations directly to consumers.
The settlement bans the corporate defendant and its owner from selling or assisting others in selling social media influence. It also prohibits them from misrepresenting, or assisting others to misrepresent, the social media influence of any person or entity or in any review or endorsement. The order imposes a $2.5 million judgment against the owner – the amount he was allegedly paid by the corporate defendant or its parent company.
The aforementioned case is not the first time the FTC has taken action against the commercial misuse of bots or inauthentic online accounts. Indeed, such actions have been taking place for more than a decade, though earlier matters arose outside the social media context.
For example, the Commission has brought three cases – against Match.com, Ashley Madison, and JDI Dating – involving the use of bots or fake profiles on dating websites. In all three cases, the FTC alleged in part that the companies or third parties were misrepresenting that communications were from real people when in fact they came from fake profiles.
Further, in 2009, the FTC took action against an alleged rogue Internet service provider that hosted malicious botnets.
All of this enforcement activity demonstrates the ability of the FTC Act to adapt to changing business and consumer behavior as well as to new forms of advertising.
Although technology and business models continue to change, the principles underlying FTC enforcement priorities and cases remain constant. One such principle lies in the agency’s deception authority.
Under the FTC Act, a claim is deceptive if it is likely to mislead consumers acting reasonably under the circumstances, to their detriment. A practice is unfair if it causes or is likely to cause substantial consumer injury that consumers cannot reasonably avoid and that is not outweighed by benefits to consumers or competition.
The Commission’s legal authority to counteract the spread of “bad” social media bots is thus both empowered and constrained by the FTC Act, pursuant to which the FTC would need to show in any given case that the use of such bots constitutes a deceptive or unfair practice in or affecting commerce.
The FTC will continue to monitor for enforcement opportunities in matters involving advertising on social media, as well as the commercial activity of bots on those platforms.
Commissioner Rohit Chopra issued a statement regarding the “viral dissemination of disinformation on social media platforms” and the “serious harms posed to society.” “Social media platforms have become a vehicle to sow social divisions within our country through sophisticated disinformation campaigns. Much of this spread of intentionally false information relies on bots and fake accounts,” Chopra states.
Chopra further observes that “bots and fake accounts contribute to increased engagement by users, and they can also inflate metrics that influence how advertisers spend across various channels.” “[T]he ad-driven business model on which most platforms rely is based on building detailed dossiers of users. Platforms may claim that it is difficult to detect bots, but they simultaneously sell advertisers on their ability to precisely target advertising based on extensive data on the lives, behaviors, and tastes of their users … Bots can also benefit platforms by inflating the price of digital advertising. The price that platforms command for ads is tied closely to user engagement, often measured by the number of impressions.”
Click here to read the Report.