National Security Meets Teenage Dance Battles: Trump Issues Executive Orders Impacting TikTok and WeChat Business in the U.S.

On August 6, 2020, Trump issued two separate executive orders that will severely restrict TikTok and WeChat’s business in the United States.  For weeks, the media has reported on Trump’s desire to “ban” TikTok with speculation about the legal authority to do so.  We break down the impact of the Orders below.

The White House has been threatening for weeks to ban both apps in the interest of protecting “the national security, foreign policy, and economy of the United States.”  According to the Orders issued Thursday, the data collection practices of both entities purportedly “threaten[] to allow the Chinese Communist Party access to Americans’ personal and proprietary information — potentially allowing China to track the locations of Federal employees and contractors, build dossiers of personal information for blackmail, and conduct corporate espionage.”

This is not a new threat.  A variety of government actions in recent years have been aimed at mitigating the national security risks associated with foreign adversaries stealing sensitive data of U.S. persons.  For example, in 2018, the Foreign Investment Risk Review Modernization Act (FIRRMA) was implemented to expand the authority of the Committee on Foreign Investment in the United States (CFIUS) to review and address national security concerns arising from foreign investment in U.S. companies, particularly where foreign parties can access the personal data of U.S. citizens.  And CFIUS has not been hesitant about exercising this authority.  Last year, CFIUS required the divestment of a Chinese investor’s stake in Grindr, the popular gay dating app, because of concerns that the Chinese investor would have access to U.S. citizens’ sensitive information which could be used for blackmail or other nefarious purposes.  That action was in the face of Grindr’s impending IPO.

In May 2019, Trump took one step further, issuing Executive Order 13873 to address a “national emergency with respect to the information and communications technology and services supply chain.”  That Order stated that foreign adversaries were taking advantage of vulnerabilities in American IT and communications services supply chain and described broad measures to address that threat.  According to these new Orders, further action is necessary to address these threats.  EO 13873 and the TikTok and WeChat Orders were all issued under the International Emergency Economic Powers Act  (IEEPA), which provides the President broad authority to regulate transactions which threaten national security during a national emergency.

Order Highlights

Both Executive Orders provide the Secretary of Commerce broad authority to prohibit transactions involving the parent companies of TikTok and WeChat, with the specific transactions covered yet to be defined.

  • The TikTok EO prohibits “any transaction by any person, or with respect to any property, subject to the jurisdiction of the United States,” with ByteDance Ltd., TikTok’s parent company, “or its subsidiaries, in which any such company has any interest, as identified by the Secretary of Commerce.”
  • The WeChat EO prohibits “any transaction that is related to WeChat by any person, or with respect to any property, subject to the jurisdiction of the United States, with Tencent Holdings Ltd.,” WeChat’s parent company, “or any subsidiary of that entity, as identified by the Secretary of Commerce.”
  • Both Executive Orders will take effect 45 days after issuance of the order (September 20, 2020), by which time the Secretary of Commerce will have identified the transactions subject to the Orders.

Implications

Until the Secretary of Commerce identifies the scope of transactions prohibited by the Executive Orders, the ultimate ramifications of these Orders remain unclear.  However, given what we do know, we have some initial thoughts on how these new prohibitions may play out.  The following are some preliminary answers to the burning questions at the forefront of every American teenager’s (and business person’s) mind.

Q:  Do these Orders ban the use of TikTok or WeChat in the United States?

A:  While the Orders do not necessarily ban the use of TikTok or WeChat itself, the apps (or any future software updates) may no longer be available for download in the Google or Apple app stores in the U.S., and U.S. companies may not be able to purchase advertising on the platforms – effectively (if not explicitly) banning the apps from the United States.

Q:  Will all transactions with ByteDance Ltd. and Tencent Holdings Ltd. (TikTok and WeChat’s parent companies, respectively) be prohibited?

A:  Given the broad language in the Orders, it does appear that U.S. app stores, carriers, or internet service providers (ISPs) will likely not be able to continue carrying the services while TikTok and WeChat are owned by these Chinese entities.  However, it is unlikely that the goal is to prohibit all transactions with these companies as a deterrent or punishment tool – which would essentially amount to designating them as Specially Designated Nationals (SDNs).  The Orders clearly contemplate limitations on the types of covered transactions, to be imposed by the Secretary of Commerce.  Furthermore, the national security policy rationale for such restrictions will not be present in all transactions (i.e., if the concern is the ability of Chinese entities to access personal data of U.S. citizens in a manner that could be used against the interests of the United States, then presumably transactions in which ByteDance Ltd. and Tencent Holdings Ltd. do not have access to such data should be permissible).  So while we do not know exactly what the scope of prohibited transactions will be, it would appear that the goal is to restrict these entities’ access to U.S. data and any transactions that would facilitate or allow such access.

Q:  What does “any property, subject to the jurisdiction of the United States” mean?

A:  Normally, the idea behind such language is to limit the prohibited transactions to those with a clear nexus to the United States: any U.S. person or person within the United States, or involving property within the United States.  It is unlikely that transactions conducted wholly outside the United States by non-U.S. entities would be impacted.  From a policy perspective, it would make sense that the prohibitions be limited to transactions that would facilitate these Chinese entities getting access to U.S.-person data through the use of TikTok and WeChat.

Q:  What about the reported sale of TikTok?

A:  There is a chance the restrictions outlined in the TikTok EO will become moot.  Reportedly, Microsoft is in talks with ByteDance to acquire TikTok’s business in the United States and a few other jurisdictions.  If the scope of prohibited transactions is tailored to those involving access to U.S. person data, and if a U.S. company can assure that U.S. user data will be protected, then the national security concerns over continued use of the app would be mitigated.  Unless and until such an acquisition takes place, U.S. companies investing in TikTok or utilizing it for advertising should be prepared for the restrictions to take effect.  At this time, there do not appear to be any U.S. buyers in the mix for WeChat.

Q:  The WeChat EO prohibits any transaction that is “related to” WeChat…what does that mean?

A:  The WeChat prohibition is more ambiguous and could have a significantly wider impact on U.S. business interests.  WeChat is widely used in the United States, particularly by people of Chinese descent, to carry out business transactions, including communicating with, and making mobile payments to, various service providers.  The WeChat EO prohibits “any transaction that is related to WeChat” with Tencent Holdings Ltd. or any of its subsidiaries.  Unlike TikTok, WeChat’s services extend beyond social media.  While the language of the ban is vague and the prohibited transactions are yet to be determined, it appears likely that using WeChat for these communications and transactions may no longer be legal.  It is also unclear if the WeChat prohibition will extend to other businesses tied to Tencent, WeChat’s parent company, including major gaming companies Epic Games (publisher of the popular “Fortnite”), Riot Games (“League of Legends”), and Activision Blizzard, in all of which Tencent has substantial ownership interests.  There has been some reporting that a White House official confirmed Tencent’s gaming interests are excluded from the Order as being unrelated to WeChat, but until the Secretary of Commerce specifies the prohibited transactions, the scope of the Order remains uncertain.

Bottom Line

Until the Secretary of Commerce issues its list of transactions prohibited under these Executive Orders, the scope and effect of the Orders remain conjectural.  This Administration’s all-in posture towards China suggests that the prohibitions could be broad and severe.  U.S. companies utilizing WeChat or TikTok for business purposes, or conducting business with the apps’ owners, should think carefully about ongoing and future transactions.  Of course, there is an election right around the corner, and a new Administration may bring significant change to related foreign, trade, and technology policy.  Thoughtful planning for a variety of scenarios will enable companies to respond appropriately as the restrictions on TikTok and WeChat are crystallized.


Copyright © 2020, Sheppard Mullin Richter & Hampton LLP.

FTC Reports to Congress on Social Media Bots and Deceptive Advertising

The Federal Trade Commission recently sent a report to Congress on the use of social media bots in online advertising (the “Report”).  The Report summarizes the market for bots, discusses how the use of bots in online advertising might constitute a deceptive practice, and outlines the Commission’s past enforcement work and authority in this area, including cases involving automated programs on social media that mimic the activity of real people.

According to one oft-cited estimate, over 37% of all Internet traffic is not human and is instead the work of bots designed for either good or bad purposes.  Legitimate uses for bots vary: crawler bots collect data for search engine optimization or market analysis; monitoring bots analyze website and system health; aggregator bots gather information and news from different sources; and chatbots simulate human conversation to provide automated customer support.

Social media bots are simply bots that run on social media platforms, where they are common and have a wide variety of uses, just as with bots operating elsewhere.  Often shortened to “social bots,” they are generally described in terms of their ability to emulate and influence humans.

The Department of Homeland Security describes them as programs that “can be used on social media platforms to do various useful and malicious tasks while simulating human behavior.”  These programs use artificial intelligence and big data analytics to imitate legitimate activities.

According to the Report, “good” social media bots – which generally do not pretend to be real people – may provide notice of breaking news, alert people to local emergencies, or encourage civic engagement (such as volunteer opportunities).  Malicious ones, the Report states, may be used for harassment or hate speech, or to distribute malware.  In addition, bot creators may be hijacking legitimate accounts or using real people’s personal information.

The Report states that a recent experiment by the NATO Strategic Communications Centre of Excellence concluded that more than 90% of social media bots are used for commercial purposes, some of which may be benign – like chatbots that facilitate company-to-customer relations – while others are illicit, such as when influencers use them to boost their supposed popularity (which correlates with how much money they can command from advertisers) or when online publishers use them to increase the number of clicks an ad receives (which allows them to earn more commissions from advertisers).

Such misuses generate significant ad revenue.

“Bad” social media bots can also be used to distribute commercial spam containing promotional links and facilitate the spread of fake or deceptive online product reviews.

At present, it is cheap and easy to manipulate social media.  Bots have remained attractive for these reasons and because they are still hard for platforms to detect, are available at different levels of functionality and sophistication, and are financially rewarding to buyers and sellers.

Using social bots to generate likes, comments, or subscribers would generally contradict the terms of service of many social media platforms.  Major social media companies have made commitments to better protect their platforms and networks from manipulation, including the misuse of automated bots.  Those companies have since reported on their actions to remove or disable billions of inauthentic accounts.

The online advertising industry has also taken steps to curb bot and influencer fraud, given the substantial harm it causes to legitimate advertisers.

According to the Report, the computing community is designing sophisticated social bot detection methods.  Nonetheless, malicious use of social media bots remains a serious issue.

In terms of FTC action and authority involving social media bots, the FTC recently announced an enforcement action against a company that sold fake followers, subscribers, views and likes to people trying to artificially inflate their social media presence.

According to the FTC’s complaint, the corporate defendant operated websites on which people bought these fake indicators of influence for their social media accounts.  The corporate defendant allegedly filled over 58,000 orders for fake Twitter followers from buyers who included actors, athletes, motivational speakers, law firm partners and investment professionals.  The company allegedly sold over 4,000 bogus subscribers to operators of YouTube channels and over 32,000 fake views for people who posted individual videos – such as musicians trying to inflate their songs’ popularity.

The corporate defendant also allegedly also sold over 800 orders of fake LinkedIn followers to marketing and public relations firms, financial services and investment companies, and others in the business world.  The FTC’s complaint states that followers, subscribers and other indicators of social media influence “are important metrics that businesses and individuals use in making hiring, investing, purchasing, listening, and viewing decisions.” Put more simply, when considering whether to buy something or use a service, a consumer might look at a person’s or company’s social media.

According to the FTC, a bigger following might impact how the consumer views their legitimacy or the quality of that product or service.  As the complaint also explains, faking these metrics “could induce consumers to make less preferred choices” and “undermine the influencer economy and consumer trust in the information that influencers provide.”

The FTC further states that when a business uses social media bots to mislead the public in this way, it could also harm honest competitors.

The Commission alleged that the corporate defendant violated the FTC Act by providing its customers with the “means and instrumentalities” to commit deceptive acts or practices.  That is, the company’s sale and distribution of fake indicators allowed those customers “to exaggerate and misrepresent their social media influence,” thereby enabling them to deceive potential clients, investors, partners, employees, viewers, and music buyers, among others.  The corporate defendant was therefore charged with violating the FTC Act even though it did not itself make misrepresentations directly to consumers.

The settlement banned the corporate defendant and its owner from selling or assisting others in selling social media influence.  It also prohibits them from misrepresenting, or assisting others to misrepresent, the social media influence of any person or entity in any review or endorsement.  The order imposes a $2.5 million judgment against the owner – the amount he was allegedly paid by the corporate defendant or its parent company.

The aforementioned case is not the first time the FTC has taken action against the commercial misuse of bots or inauthentic online accounts.  Indeed, such actions, while previously involving matters outside the social media context, have been taking place for more than a decade.

For example, the Commission has brought three cases – against Match.com, Ashley Madison, and JDI Dating – involving the use of bots or fake profiles on dating websites.  In all three cases, the FTC alleged in part that the companies or third parties were misrepresenting that communications were from real people when in fact they came from fake profiles.

Further, in 2009, the FTC took action against an alleged rogue Internet service provider that hosted malicious botnets.

All of this enforcement activity demonstrates the ability of the FTC Act to adapt to changing business and consumer behavior as well as to new forms of advertising.

Although technology and business models continue to change, the principles underlying FTC enforcement priorities and cases remain constant.  One such principle lies in the agency’s deception authority.

Under the FTC Act, a claim is deceptive if it is likely to mislead consumers acting reasonably in the circumstances, to their detriment.  A practice is unfair if it causes or is likely to cause substantial consumer injury that consumers cannot reasonably avoid and which is not outweighed by benefits to consumers or competition.

The Commission’s legal authority to counteract the spread of “bad” social media bots is thus powered but also constrained by the FTC Act, pursuant to which the FTC would need to show in any given case that the use of such bots constitutes a deceptive or unfair practice in or affecting commerce.

The FTC will continue its monitoring of enforcement opportunities in matters involving advertising on social media as well as the commercial activity of bots on those platforms.

Commissioner Rohit Chopra issued a statement regarding the “viral dissemination of disinformation on social media platforms” and the “serious harms posed to society.”  “Social media platforms have become a vehicle to sow social divisions within our country through sophisticated disinformation campaigns.  Much of this spread of intentionally false information relies on bots and fake accounts,” Chopra states.

Commissioner Chopra states that “bots and fake accounts contribute to increased engagement by users, and they can also inflate metrics that influence how advertisers spend across various channels.”  “[T]he ad-driven business model on which most platforms rely is based on building detailed dossiers of users.  Platforms may claim that it is difficult to detect bots, but they simultaneously sell advertisers on their ability to precisely target advertising based on extensive data on the lives, behaviors, and tastes of their users … Bots can also benefit platforms by inflating the price of digital advertising.  The price that platforms command for ads is tied closely to user engagement, often measured by the number of impressions.”

Click here to read the Report.


© 2020 Hinch Newman LLP

“OK, Boomer!”: Not Okay In the Office

As recently highlighted by the New York Times, a new phrase emblematic of the real or perceived “War Between the Generations” has gone viral: “OK, Boomer!”  The phrase, popularized on the Internet and, in particular, Twitter by Generation Z and Millennials, has been used to dismiss baby boomers’ thoughts and opinions, sometimes viewed by younger generations as paternalistic or just out of step.

And, the phrase isn’t just living in Twitter feeds and the comments sections of opinion pieces.  There is “OK, Boomer!” merchandise and, just last week, a 25-year-old member of the New Zealand Parliament used the phrase to dismiss a fellow lawmaker’s perceived heckling during a debate about climate change.

While many may find “OK, Boomer!” a harmless way to point out generational differences, the phrase’s popularity could lead to problems once it creeps into the workplace.  Age (over 40) is a protected category under both California law (i.e., the Fair Employment and Housing Act) and federal law (i.e., the Age Discrimination in Employment Act).  Whether the speaker is well-intentioned or not, dismissive attitudes about older workers could form the basis of claims for discrimination and/or harassment.  And, as one radio host recently opined, the phrase “OK, Boomer!” may be regarded by some as an outright slur.

Generation Z and Millennial employees understand that using derogatory or dismissive comments related to gender, race, religion, national origin, disability, and sexual orientation is inappropriate.  Yet some may not have made the same leap with regard to insidious or disparaging comments about a co-worker’s age.  Given the prevalence of age discrimination lawsuits, employers should take heed and consider reminding their workforce about the impropriety of this and other age-related phrases, and train their employees to leave the generation wars at the door.


© 2019 Proskauer Rose LLP.

For more on employment discrimination see the National Law Review Labor & Employment law page.

China’s TikTok Facing Privacy & Security Scrutiny from U.S. Regulators, Lawmakers

Perhaps it is a welcome reprieve for Facebook, Google and YouTube. A competing video-sharing social media company based in China has drawn the attention of U.S. privacy officials and lawmakers, with a confidential investigation under way and public hearings taking place on Capitol Hill.

Reuters broke the story that the Treasury Department’s Committee on Foreign Investment in the United States (CFIUS) is conducting a national security review of the owners of TikTok, a social media video-sharing platform that claims a young but formidable U.S. audience of 26.5 million users. CFIUS is engaged in the context of TikTok owner ByteDance Technology Co.’s $1 billion acquisition of U.S. social media app Musical.ly two years ago, a deal ByteDance did not present to the agency for review.

Meanwhile, U.S. legislators are concerned about censorship of political content, such as coverage of protests in Hong Kong, and the location and security of personal data the company stores on U.S. citizens.

Sen. Josh Hawley (R-Mo.), Chairman of the Judiciary Committee’s Subcommittee on Crime and Terrorism, invited TikTok and others to testify in Washington this week for hearings titled “How Corporations and Big Tech Leave Our Data Exposed to Criminals, China, and Other Bad Actors.”

While TikTok did not send anyone to testify, the company’s recently appointed General Manager for North America and Australia Vanessa Pappas, formerly with YouTube, sent a letter indicating that it did not store data on U.S. citizens in China. She explained in an open letter on the TikTok website, which reads similarly to that reportedly sent to the subcommittee, that the company is very much aware of its privacy obligations and U.S. regulations and is taking a number of measures to address its obligations.

For nearly eight years Pappas served as Global Head of Creative Insights, and before that Audience Development, for YouTube. In late 2018 she was a strategic advisor to ByteDance, and in January 2019 became TikTok’s U.S. General Manager. In July her territory expanded to North America and Australia. Selecting someone who played such a leadership position at YouTube, widely used and familiar to Americans, to lead U.S. operations may serve to calm the nerves of U.S. regulators. But given U.S. tensions with China over trade, security and intellectual property, TikTok and Pappas have a way to go.

Some commentators think Facebook must enjoy watching TikTok getting its turn in the spotlight, especially since TikTok is a growing competitor to Facebook in the younger market. If only briefly, it may divert attention away from the scrutiny being paid globally to the social media giant’s privacy and data collection practices, and the many fines.

It’s clear that TikTok has Facebook’s attention. TikTok, which allows users to create and share short videos with special effects, did a great deal of advertising on Facebook. The ads were clearly targeting the teen demographic and were apparently successful. CEO Mark Zuckerberg recently said in a speech that mentions of the Hong Kong protests were censored in TikTok feeds both in China and in the United States, something TikTok denied. In a case of unfortunate timing, Zuckerberg this week posted that 100 or so software developers may have improperly accessed Facebook user data.

Since TikTok is largely a short-video sharing application, it competes at some level with YouTube in the youth market. In the third quarter of 2019, 81 percent of U.S. internet users aged 15 to 25 accessed YouTube, according to figures collected by Statista. YouTube boasts more than 126 million monthly active users in the U.S., 100 million more than TikTok.

Potential counterintelligence threat ‘we cannot ignore’

Last month, U.S. Senate Minority Leader Chuck Schumer (D-NY) and Senator Tom Cotton (R-AR) asked the Acting Director of National Intelligence to conduct a national security probe of TikTok and other Chinese companies. They expressed concern about the collection of user data, about whether the Chinese government censors content feeds to the U.S., as Zuckerberg suggested, and about whether foreign influencers were using TikTok to advance their objectives.

“With over 110 million downloads in the U.S. alone,” the Schumer and Cotton letter read, “TikTok is a potential counterintelligence threat we cannot ignore. Given these concerns, we ask that the Intelligence Community conduct an assessment of the national security risks posed by TikTok and other China-based content platforms operating in the U.S. and brief Congress on these findings.” They must be happy with Sen. Hawley’s hearings.

In her statement, TikTok GM Pappas offered the following assurances:

  • U.S. user data is stored in the United States with backup in Singapore — not China.
  • TikTok’s U.S. team does what’s best for the U.S. market, with “the independence to do so.”
  • The company is committed to operating with greater transparency.
  • California-based employees lead TikTok’s moderation efforts for the U.S.
  • TikTok uses machine learning tools and human content reviews.
  • Moderators review content for adherence to U.S. laws.
  • TikTok has a dedicated team focused on cybersecurity and privacy policies.
  • The company conducts internal and external reviews of its security practices.
  • TikTok is forming a committee of users to serve them responsibly.
  • The company has banned political advertising.

Both TikTok and YouTube have been stung by failing to follow the rules when it comes to the youth and children’s market. In February, TikTok agreed to pay $5.7 million to settle the FTC’s case alleging that, through the Musical.ly app, TikTok illegally collected personal information from children. At the time, it was the largest civil penalty ever obtained by the FTC in a case brought under the Children’s Online Privacy Protection Act (COPPA). The law requires websites and online services directed at children to obtain parental consent before collecting personal information from kids under 13. That record was smashed in September, though, when Google and its YouTube subsidiary agreed to pay $170 million to settle allegations brought by the FTC and the New York Attorney General that YouTube was also collecting personal information from children without parental consent. The settlement required Google and YouTube to pay $136 million to the FTC and $34 million to New York.

Quality degrades when near-monopolies exist

What I am watching for here is whether (and how) TikTok and other social media platforms respond to these scandals by competing on privacy.

For example, in its early years Facebook lured users with the promise of privacy. It was eventually successful in defeating competitors that offered little in the way of privacy, such as MySpace, which fell from a high of 75.9 million users to 8 million today. But as Facebook developed a dominant position in social media through acquisition of competitors like Instagram or by amassing data, the quality of its privacy protections degraded. This is to be expected where near-monopolies exist and anticompetitive mergers are allowed to close.

Now perhaps the pendulum is swinging back. As privacy regulation and publicity around privacy transgressions increase, competitive forces may come back into play, forcing social media platforms to compete on the quality of their consumer privacy protections once again. That would be a great development for consumers.

 


© MoginRubin LLP

ARTICLE BY Jennifer M. Oliver of MoginRubin.
Edited by Tom Hagy for MoginRubin LLP.
For more on social media app privacy concerns, see the National Law Review Communications, Media & Internet law page.

Social Media Scrutiny on Visa Applications

On May 31, 2019, the Department of State added new questions to Forms DS-160/DS-156, Nonimmigrant Visa Application, and Form DS-260, Immigrant Visa Application. These additional questions require the foreign national to disclose the social media platforms they have used within the past five years, as well as provide their username(s) for each platform. Passwords for these accounts do not have to be disclosed and should not be provided. Additional questions request the visa applicant’s current e-mail address and phone number, in addition to contact information for the previous five years. If applicants are unable to recall precise details, they may insert “unknown,” but should be prepared for the possibility of additional screening during the visa process. Please note, this is a question that must be answered as fully as possible by the foreign national. Not providing the requested details could result in denial of the application, or quite possibly denial of subsequent immigration applications.

Forms DS-160/DS-156 and DS-260 are the online applications used by individuals seeking a nonimmigrant or immigrant visa from the U.S. Department of State. Completion of the forms is the first step in the process with the Department of State; the forms must be submitted before scheduling and attending the visa interview. The Department of State has stated that the changes are intended “to improve … screening processes to protect U.S. citizens, while supporting legitimate travel to the United States,” as well as to aid in “vetting … applicants and confirming their identity.”

Further, on September 4, 2019, the Department of Homeland Security proposed a federal rule to add similar social media questions to several forms, including the applications for naturalization, advance parole, adjustment of status, asylum, and to remove conditions on permanent residents, along with many others. Additionally, applicants for the Electronic System for Travel Authorization (ESTA) and the Electronic Visa Update System (EVUS), used for frequent international travel, are included in the proposed rule.

These changes stem from the President’s March 6, 2017 Executive Order, requesting heightened screening and vetting of visa applicants. The March 2017 Executive Order requested that the Secretary of State, the Attorney General, the Secretary of Homeland Security and the Director of National Intelligence create “a uniform baseline for screening and vetting standards and procedures.” The addition of the social media and contact information requirements to these application forms is part of the Department of State’s response to that Order. This represents a step up for the Department of State, which previously only asked that applicants voluntarily provide their social media information.

An individual’s social media content can easily be taken out of context, even more so when the postings are from long ago and/or are in a foreign language. Social media also provides an individual’s history of contacts, associations and preferences. While much (justifiable) concern has been expressed about the scrutiny of foreign nationals’ associations and political speech, many social media platforms and the posts thereon will provide information on a foreign national’s employment history and residency. Employment history and residency information can be particularly relevant in employment-based nonimmigrant and immigrant visa applications, such as H-1B, L-1A and I-140 petitions. These details are also important because the Department of State can compare the information on social media to the information contained in the visa applications. Discrepancies between the two can make it difficult to successfully obtain both nonimmigrant and immigrant visas, and can lead to delays in processing, requests for additional information, increased scrutiny in other areas of the application and even denial.

Additionally, many individuals do not keep their social media accounts up to date. Because the requested information covers the last five years of the applicant’s social media history (including accounts that may be closed at the time of the application), the information is likely to be out of date, incomplete and out of context. Further, the tendency to embellish employment history or to inadvertently misstate employer information (e.g., indicating Company A as the employer while actually working for placement agency Company B that has been assigned to Company A) can work against an applicant. Both of these scenarios can result in the Department of State obtaining information contradictory to the nonimmigrant or immigrant form and can create obstacles to obtaining the desired visa.

Accordingly, it is imperative that foreign nationals be cognizant of the information they post on their social media accounts regarding their residency and employment history, taking particular care that the information contained on those platforms is consistent with the information contained in their visa applications.


© 2019 Vedder Price

For more on visa application requirements, see the National Law Review Immigration Law section.

Can You Spy on Your Employees’ Private Facebook Group?

For years, companies have encountered issues stemming from employee communications on social media platforms. When such communications take place in private groups not accessible to anyone except approved members, though, it can be difficult for an employer to know what actually is being said. But can a company try to get intel on what’s being communicated in such forums? A recent National Labor Relations Board (NLRB) case shows that, depending on the circumstances, such actions may violate labor law.

At issue in the case was a company that was facing unionizing efforts by its employees. Some employees of the company were members of a private Facebook group and posted comments in the group about potentially forming a union. Management became aware of this activity and repeatedly asked one of its employees who had access to the group to provide management with reports about the comments. The NLRB found this conduct to be unlawful and held: “It is well-settled that an employer commits unlawful surveillance if it acts in a way that is out of the ordinary in order to observe union activity.”

This case provides another reminder that specific rules come into play when employees are considering forming a union. Generally, companies cannot:

  • Threaten employees based on their union activity
  • Interrogate workers about their union activity, sentiments, etc.
  • Make promises to employees to induce them to forgo joining a union
  • Engage in surveillance (i.e., spying) on workers’ union organizing efforts

The employer’s “spying” in this instance ran afoul of these parameters, which can have costly consequences, such as overturned discipline and backpay awards.


© 2019 BARNES & THORNBURG LLP

For more on employees’ social media use, see the National Law Review Labor & Employment law page.

To Stalk or Not to Stalk . . . That Is the Question – Using Social Media for Applicant Review

Now more than ever, employers are using social media to screen job applicants. According to a 2018 survey, 70 percent of employers use social media to research candidates. Using social media to research job applicants can provide you with useful information, but it can also get you into trouble.

When you review an applicant’s social media account, such as Facebook, LinkedIn, Twitter, etc., you may learn information regarding the applicant’s race, sex, religion, national origin, or age, among other characteristics.

As our readers are aware, a variety of state and federal laws, such as Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, and the Americans with Disabilities Act, prohibit employers from refusing to hire a candidate based on a number of legally protected characteristics. Just as it would be unlawful to ask an applicant if he or she has a disability during an interview and fail to hire that applicant based on his or her disability, it would also be illegal not to hire an applicant because you observed a Facebook post in which she expressed her hope to be pregnant within the next six months.

Consider the following best practice tips for using social media to screen applicants:

  1. Develop a Policy and Be Consistent – Implement a policy detailing which social media websites you will review, the purpose of the review and type of information sought, at what stage the review will be conducted, and how much time you will spend on the search. Applying these policies consistently will help to combat claims of discriminatory hiring practices should they arise.
  2. Document Your Findings – Save what you find, whether it is a picture or a screenshot of a comment the applicant made. What you find on social media can disappear as easily as you found it. Protect your decision by documenting what you find. In case the matter is litigated, it can be produced later.
  3. Wait Until After the Initial Interview – Avoid performing a social media screening until after the initial interview. It is much easier to defend a decision not to interview or hire an applicant if you do not have certain information early on.
  4. Follow FCRA Requirements – If you decide to use a third party to perform social media screening services, remember that these screenings are likely subject to the Fair Credit Reporting Act requirements because the screening results constitute a consumer report. This means the employer will be required to: 1) inform the applicant of the results that are relevant to its decision not to hire; 2) provide the applicant with the relevant social media document; 3) provide the applicant notice of his or her rights under the FCRA; and 4) allow the applicant to rebut the information before making a final decision.
  5. Do Not Ask for Their Password – Many states have enacted laws that prohibit an employer from requesting or requiring applicants to provide their login credentials for their social media and other internet accounts. Although some states still allow this, the best practice is not to ask for it. Further, while it is not illegal to friend request a job applicant, proceed with caution. Friend requesting a job applicant (and assuming the applicant accepts the request) may provide you with greater access to the applicant’s personal life. Many people categorize portions of their profiles as private, thereby protecting specific information from the public’s view. If you receive access to this information you may gain more knowledge regarding the applicant’s protected characteristics. If you are going to friend request applicants, you should include this in your written policy and apply this practice across the board.

© 2019 Foley & Lardner LLP

More information for employers considering job applicants on the National Law Review Labor & Employment law page.

FTC Attorney on Endorsement Guide Compliance

Influencer marketing and review websites have attracted a great deal of attention recently from state and federal regulatory agencies, including the FTC.  The FTC’s Endorsement Guides address the application of Section 5 of the FTC Act to the use of endorsements and testimonials in advertising.

At their core, the FTC Endorsement Guides (the “Guides”) reflect the basic truth-in-advertising principle that endorsements must be honest and not misleading.  The Guides suggest several best practices, including, but not limited to the following:

  1. Influencers must be legitimate and bona fide users, and endorsements must reflect honest opinions.
  2. Endorsers cannot make claims about a product that would require proof the advertiser does not have.  Bloggers and brands are potentially subject to liability for claims with no reasonable basis therefor.
  3. Clearly and conspicuously disclose material connections between advertisers and endorsers (e.g., a financial or family relationship with a brand).
  4. To make a disclosure “clear and conspicuous,” advertisers should use plain and unambiguous language and make the disclosure stand out.  Consumers should be able to notice the disclosure easily.  They should not have to look for it.  Generally speaking, disclosures should be close to the claims to which they relate; in a font that is easy to read; in a shade that stands out against the background; for video ads, on the screen long enough to be noticed, read, and understood; and for audio disclosures, read at a cadence that is easy for consumers to follow and in words consumers will understand.
  5. Never assume that a social media platform’s disclosure tool is sufficient.  Some platforms’ disclosure tools are insufficient.  Placement is key.
  6. Avoid ambiguous disclosures like #thanks, #collab, #sp, #spon or #ambassador.  Clarity is crucial.  Material connection disclosures must be clear and unmistakable.
  7. Do not rely on a disclosure placed after a CLICK MORE link or in another easy-to-miss location.
  8. Advertisers that use bloggers and other social media influencers to promote products are responsible for implementing reasonable training, monitoring and compliance programs (e.g., educating members about claim substantiation requirements and disclosing material connections, searching for what people are saying and taking remedial action).
  9. Statements like “Results not typical” or “Individual results may vary” are likely to be interpreted to mean that the endorser’s experience reflects what others can also expect.  Therefore, advertisers must have adequate proof to back up the claim that the results shown in the ad are typical, or clearly and conspicuously disclose the generally expected performance in the circumstances shown in the ad.
  10. Brands can ask customers about their experiences and feature their comments in ads.  If the customers had no reason to expect compensation or any other benefit before giving their comments, consult with an FTC CID and defense attorney to assess whether a disclosure is necessary.  If customers have been provided with a reason to expect a benefit from providing their thoughts about a product, a disclosure is probably necessary.

What about affiliate marketers with links to online retailers on their websites that get compensated for clicks or purchases?  According to the FTC, the material relationship to the brand must be clearly and conspicuously disclosed so that readers can decide how much weight to give the endorsement.  In some instances – like when the affiliate link is embedded in a product review – a single disclosure may be adequate.

When the review has a clear and conspicuous disclosure of a material relationship and the reader can see both the review containing that disclosure and the link at the same time, readers may have the information they need.  However, if the product review containing the disclosure and the link are separated, readers may not make the connection.

Never put disclosures in obscure places, behind a poorly labeled hyperlink or in a “terms of service” agreement.  That is not enough.  Neither is placing a disclosure below the review or below the link to the online retailer so readers would have to keep scrolling after they finish reading.

Consumers should be able to notice disclosures easily.

U.S. regulators are not the only ones policing influencer disclosures.  In fact, the Competition and Markets Authority, the British government agency that regulates advertising, recently sent numerous warning letters to British celebrities and other social media influencers.  The CMA has also recently released its guidelines for influencers.

The FTC has already demonstrated that it monitors the accounts of popular influencers.  It has also demonstrated that it can and will initiate investigations and enforcement actions.  Brands are well-advised to review their promotional practices and to implement written policies and monitoring protocols.


© 2019 Hinch Newman LLP

For more on influencers, endorsement & advertising, see the National Law Review Communications, Media & Internet law page.

How Social Media Impacted the Teenage Juul Epidemic: Study Recommends Strict FDA Control

BMJ’s journal, Tobacco Control, just released a study recommending that the FDA do more to control Juul’s e-cigarette advertising on social media. The study included a review of over 15,000 posts in a three-month period during 2018. Approximately 30% of reviewed posts were promotional, e.g., leading to Juul purchase locations, and over half the posts included “youth” and “youth lifestyle” themes. Because many of these posts were re-posts or user-generated, rather than ads specifically placed by Juul, the company protested that 99% were third-party content over which Juul had no control. However, the intended goal of social media advertising is to “share” and to inspire the creation of third-party user-generated content that is also shared. Juul’s public comments oddly suggest it doesn’t understand social media advertising. That is quite unlikely.

Juul first came under fire for its youth-focused advertising back in 2016, but has only recently made changes to restrict it. Not until late 2018, long after being called out by educational and government agencies for targeting youth, did it begin to materially limit its social media accounts and social media messaging.

Juul’s chief administrative officer, Ashley Gould, was quoted last year telling CNN that Juul was “completely surprised by the youth usage of the product.” (Source: CNN.) In response, Dr. Robert Jackler, founder of the Stanford Research into the Impact of Tobacco Advertising, said, “I don’t believe that, not for a minute.” He added, “They’re also a very digital, very analytical company. They know their market. They know what they’re doing.”

Gould’s obfuscation about underage users doesn’t fool people in the know—and it certainly doesn’t generate trust that Juul will voluntarily follow ethical practices. Juul only instituted its recent changes to restrict youth advertising after FDA scrutiny and bad press.

Juul also advertises its products are for smoking cessation. Last week, in response to San Francisco’s imminent ban on e-cigarette sales, Juul raised concerns that people would resort back to traditional cigarettes—implying this would further negatively impact the health of San Franciscans.

Unfortunately for Juul, the internet remembers everything. In a 2015 Verge interview at the beginning of Juul’s meteoric rise, one of Juul’s R&D engineers made it clear that Juul neither cared about smoking cessation nor had any concerns about creating an addictive product. The engineer (Atkins) was quoted as saying, “We don’t think a lot about addiction here because we’re not trying to design a cessation product at all,” and “anything about health is not on our mind.”

Juul’s public “feint and parry” strategy tends to mirror the traditional tobacco industry—a group with a sordid history of youth-focused advertising, concealment, lying to officials, and purposely creating highly addictive products in order to boost sales. It took multiple lawsuits and the Master Settlement Agreement of the nineties for big tobacco to materially comply with government regulations.

[Image courtesy of Trinkets & Trash, Rutgers School of Public Health]

Unfortunately, despite all of that history, the tobacco industry’s disregard for consumer protection has spread into the e-cigarette industry. As late as 2017, the big tobacco-owned e-cigarette brand Blu launched its “Something Better” advertising campaign. The campaign mocked government-mandated package warnings on traditional cigarettes. The ads included variations of the following text and were designed to look like cigarette warning labels:

“Important: Contains flavor”
“Important: Vaping blu smells good”
“Important: No ashtrays needed”

This parody of government-mandated safety warnings mocks consumer protection efforts by government agencies—a tactic not surprising coming from a tobacco company. Right now, there is very little regulation of e-cigarettes despite the fact that the FDA was granted oversight in 2016. Like Blu, Juul also has heavy ties to big tobacco. Altria, parent company of Philip Morris, the maker of Marlboro, is heavily invested in Juul.

If Juul truly intends to address social media advertising, consumer protection, and youth e-cigarette use, it must do more than spew rhetoric through the media. It must take incisive, prophylactic action to reduce exposure of its products to underage users. If history is any indication, that won’t happen without strict FDA regulation.

If you or someone you know has become seriously addicted to nicotine in e-cigarettes, has health problems associated with e-cigarettes, or has been injured by a malfunctioning e-cigarette, you should contact an experienced e-cigarette injury attorney to advise you on the ability to seek compensation for your injuries.

COPYRIGHT © 2019, STARK & STARK
For more on nicotine product regulation see the National Law Review Consumer Protection page.

Fake Followers; Real Problems

Fake followers and fake likes have spread throughout social media in recent years.  Social media platforms such as Facebook and Instagram have announced that they are cracking down on so-called “inauthentic activity,” but the practice remains prevalent.  For brands advertising on social media, paying for fake followers and likes is tempting—the perception of having a large audience offers a competitive edge by lending the brand additional legitimacy in the eyes of consumers, and the brand’s inflated perceived reach attracts higher profile influencers and celebrities for endorsement deals.  But the benefits come with significant legal risks.  By purchasing fake likes and followers, brands could face enforcement actions from government agencies and false advertising claims brought by competitors.

Groundbreaking AG Settlement: Selling Fake Engagement Is Illegal

On January 30, 2019, the New York Attorney General announced a settlement prohibiting Devumi LLC from selling fake followers and likes on social media platforms.  Attorney General Letitia James announced that the settlement marked “the first finding by a law enforcement agency that selling fake social media engagement and using stolen identities to engage in online activity is illegal.”[i] 

Devumi’s customers ranged from actors, musicians, athletes, and modeling agencies to businesspeople, politicians, commentators, and academics, according to the settlement.  Customers purchased Devumi’s services hoping to show the public that they or their products were more popular (and by implication, more legitimate) than they really were.  The AG said Devumi’s services “deceived and attempted to affect the decision-making of social media audiences, including: other platform users’ decisions about what content merits their own attention; consumers’ decisions about what to buy; advertisers’ decisions about whom to sponsor; and the decisions by policymakers, voters, and journalists about which people and policies have public support.”[ii]

Although the Devumi settlement did not impose a monetary punishment, it opened the doors for further action against similar services, and the AG warned that future perpetrators could face financial penalties.

Buyers Beware

Although the New York AG’s settlement with Devumi only addressed sellers of fake followers and likes, companies buying the fake engagement could also face enforcement actions from government agencies and regulatory authorities.  But the risk doesn’t end there—brands purchasing fake engagement could become targets of civil suits brought by competitors, where the potential financial exposure could be much greater.

Competing brands that run legitimate social media marketing campaigns and are losing business to brands buying fake likes and followers may be able to recover through claims brought under the Lanham Act and/or state unfair competition laws, such as California’s Unfair Competition Law (“UCL”).[iii]

The Lanham Act imposes liability upon “[a]ny person who, on or in connection with any goods or services, … uses in commerce any … false or misleading description of fact, or false or misleading representation of fact, which … is likely to … deceive as to the … sponsorship, or approval of his or her goods, services, or commercial activities by another person” or “in commercial advertising … misrepresents the nature, characteristics, qualities, or geographic origin of … goods, services, or commercial activities.”[iv]

Fake likes on social media posts could constitute false statements about the “approval of [the advertiser’s] goods, services, or commercial activities” under the Lanham Act.  Likewise, a fake follower count could misrepresent the nature or approval of “commercial activities,” deceiving the public into believing a brand is more popular among consumers than it is.

The FTC agrees that buying fake likes is unlawful.  It publishes guidelines to help the public understand whether certain activities could violate the FTC Act.  In the FAQ for the Endorsement Guides, the FTC states, “an advertiser buying fake ‘likes’ is very different from an advertiser offering incentives for ‘likes’ from actual consumers.  If ‘likes’ are from non-existent people or people who have no experience using the product or service, they are clearly deceptive, and both the purchaser and the seller of the fake ‘likes’ could face enforcement action.” (emphasis added).[v]  

Although there is no private right of action to enforce FTC Guidelines, the Guidelines may inform what constitutes false advertising under the Lanham Act.[vi]  Similarly, violations of the FTC Act (as described in FTC Guidelines) may form the basis of private claims under state consumer protection statutes, including California’s UCL.[vii]

While the Devumi settlement paved the way for private lawsuits against sellers of fake social media engagement, buyers need to be aware that they could face similar consequences.  Because of the risk of both government enforcement actions and civil lawsuits brought by competitors, brands should resist the temptation to artificially grow their social media footprint and instead focus on authentically gaining popularity.  Conversely, brands operating legitimately but losing business to competitors buying fake engagement should consider using the Lanham Act and state unfair competition laws as tools to keep the playing field more even.


[i] Attorney General James Announces Groundbreaking Settlement with Sellers of Fake Followers and “Likes” on Social Media, N.Y. Att’y Gen.

[ii] Id.

[iii] Cal. Bus. & Prof. Code § 17200, et seq.

[iv] 15 U.S.C. § 1125(a).

[v] The FTC’s Endorsement Guides: What People Are Asking, Fed. Trade Comm’n (Sept. 2017).

[vi] See Grasshopper House, LLC v. Clean & Sober Media, LLC, No. 218CV00923SVWRAO, 2018 WL 6118440, at *6 (C.D. Cal. July 18, 2018) (“a ‘plaintiff may and should rely on FTC guidelines as a basis for asserting false advertising under the Lanham Act.’”) (quoting Manning Int’l Inc. v. Home Shopping Network, Inc., 152 F. Supp. 2d 432, 437 (S.D.N.Y. 2001)).

[vii] See Rubenstein v. Neiman Marcus Grp. LLC, 687 F. App’x 564, 567 (9th Cir. 2017) (“[A]lthough the FTC Guides do not provide a private civil right of action, ‘[v]irtually any state, federal or local law can serve as the predicate for an action under [the UCL].’”) (quoting Davis v. HSBC Bank Nev., N.A., 691 F.3d 1152, 1168 (9th Cir. 2012)).

 

© 2019 Robert Freund Law.
This post was written by Robert S. Freund of Robert Freund Law.