Why Correctly Understanding Antitrust Risk is Crucial to Properly Addressing Brand Dilution in the E-Commerce Age

“Run a Google search for the phrase ‘minimum advertised price policy’ and you will find hundreds of policies, posted on a variety of manufacturers’ websites.  Interest in minimum advertised price (‘MAP’) policies has skyrocketed in recent years.”  That is what one of my colleagues wrote in a prescient article in 2013.[i]  Since 2013, the interest in MAP policies has exploded.  But much of the online guidance regarding MAP policies is misguided and clearly has not been crafted or vetted by antitrust counsel.  Manufacturers should proceed with caution and consult with antitrust counsel before adopting a MAP policy.

  1. What is a MAP Policy?

MAP policies impose restrictions on the price at which a product or service may be advertised without restricting the actual sales price.  In the context of print advertising, MAP policies usually concern only off-site advertising, such as in flyers or brochures.  They do not restrict the in-store advertising or sales price offered at the retailer’s “brick and mortar” locations.  In the context of internet advertising, MAP policies often concern pricing advertised by an internet retailer on its website.  But with internet advertising, the distinction between an advertised price and a sales price is often blurry and requires special attention.

  2. What has been driving all the recent interest in MAP?

The e-commerce boom has been one key driver.  Although e-commerce has been a financial boon for some by allowing products to reach broader audiences and conveniently connecting consumers to highly discounted and diversified products, other manufacturers are concerned that they are losing control over their brands and the advertising of their products.  Once-premium branded products may be discounted to the point of being considered cheap.  As margins are squeezed, service may suffer and consumers ultimately lose out.

This phenomenon, and how to address it, has attracted massive recent attention, including from the popular press.  In 2017, the Wall Street Journal published an article headlined, Brands Strike Back:  Seven Strategies to Loosen Amazon’s Grip, reporting that a growing number of brands are pushing back on large online retailers by adopting MAP policies.[ii]  The article reported that instituting MAP policies can be effective in decreasing online discounting.  A recent Forbes article similarly recommended that manufacturers adopt MAP policies in response to the emergence of e-commerce sites.[iii]

  3. Popular Misconceptions About MAP.

Public interest in MAP has been great for drawing attention to the usefulness of MAP policies in addressing brand dilution.  But much of the popular discourse about MAP fails to account for the critical legal considerations attendant to adopting and enforcing a MAP policy, and would steer the unwary into legally risky territory.  For example, a sampling of articles online—which will go unattributed—offer the following characterizations in promoting MAP policies:

  • A “MAP policy is an agreement between manufacturers and distributors or retailers”;
  • In a MAP policy, “authorized sellers agree to the policy and in return, the brand agrees to enforce their pricing”;
  • To prevent “margin erosion,” “manufacturers and retailers work together to set a minimum advertised price”;
  • MAP should be “enforced by both” the manufacturer and reseller; and
  • Brands should “control sellers” through “enforceable agreements.”

These suggestions to implement MAP through an “agreement” or in “cooperation” with resellers, and to use MAP to enforce product pricing, may have intuitive appeal.  And in fact, several MAP templates available online are styled as “agreements” between the manufacturer and reseller.  But be warned—these suggestions, if carried out, could pose significant antitrust risk that could subject companies to serious and expensive liability.  The next section explains why.

  4. Quick Antitrust Legal Guide to MAP.

When most people think of illegal antitrust conspiracies, they think of agreements among competitors to fix prices or restrict competition, which are per se illegal.  But in general, manufacturers also may not require their resellers—either distributors or retailers—to resell at (or above) a set price.  This is known as minimum resale price maintenance (“RPM”), and it is also per se illegal under the antitrust laws of several states.

Although RPM may be per se illegal under certain state laws, MAP policies are generally analyzed under a more lenient legal framework called the “rule of reason.”  But a MAP policy must be crafted with care to avoid being treated as RPM.  For example, agreements with resellers concerning the minimum advertised price may be viewed, depending on the circumstances, as actually having the effect of setting the minimum sales price, converting the MAP policy into RPM.  A MAP policy also must be adopted free from any agreement with a manufacturer’s horizontal competitors, which could be found to be an unlawful horizontal conspiracy.  In one prominent example, the Federal Trade Commission (“FTC”) brought an enforcement action against five major competing compact disc (“CD”) distributors challenging their MAP policies as violating federal antitrust laws.[iv]  All five major CD distributors had adopted MAP policies around the same time, allegedly at the urging of retailers, and the policies each prohibited all advertising below a certain price, including in-store advertising.  The FTC viewed the policies under those circumstances as horizontal agreements among the distributors, and thus per se illegal.

  5. Practical Antitrust Pointers for MAP.

Several guiding principles can help minimize antitrust risk in adopting a MAP policy:

  • Advertising Only.  A retailer should remain free to sell a product at any price, so that the restriction on advertising is deemed to be a non-price restraint.  In the context of online sales, adhering to this principle can require special care, as some might try to argue that there is little distinction between an advertised price and a sales price.  MAP policies that concern internet advertising thus often include provisions that allow internet retailers to communicate an actual sales price in a different manner—such as “Call for Pricing” or “Add to Cart to See Price.”
  • No Agreement.  A MAP policy should be drafted as a unilateral policy—i.e., a policy that the manufacturer creates on its own (in consultation with antitrust counsel), without input from or agreement with its own competitors or with its downstream resellers.  The policy should expressly state that it is a unilateral policy that does not constitute an agreement.
  • Broad Application.  Policies that apply to all off-site advertising, no matter the form, are more likely to be upheld than policies that are specifically directed at internet retailers.
  • Clarity.  A MAP policy should be user-friendly and easy to understand.  One best practice is to include a Frequently Asked Questions guide to clarify how the policy works.

Antitrust risk must be kept in mind not just when a MAP policy is created, but throughout its implementation and enforcement.  The manner in which a MAP policy is enforced could risk converting the unilateral policy into conduct that could be viewed as a tacit agreement, even if no written agreement is ever signed.  For example, enlisting or “working with” resellers to enforce the policy, as suggested by articles online, could be viewed as evidence that a manufacturer is coordinating with resellers as part of an overall agreement.  Working with competitors to coordinate strategies for MAP enforcement would also pose significant legal risk.  For that reason, manufacturers that are adopting MAP policies should resist communications with resellers or competitors about MAP and continue to work with antitrust counsel through implementation and enforcement.

To be sure, some may believe that coordination, for example, between manufacturers and retailers, is helpful in stamping out e-commerce discounting.  But even if such coordination between manufacturers and retailers could be effective in addressing such discounting, it carries significant legal risks.  And potentially risky agreements with resellers are not a manufacturer’s only option in addressing how its products are advertised in e-commerce.  Other tools are also available and can be adopted in conjunction with MAP and other policies.  As just one example, a unilateral distribution policy, in which a manufacturer unilaterally suspends resellers that sell through unauthorized e-commerce sites, can be a powerful complement to a MAP policy.  It also may present a more direct way to address the e-commerce channels through which goods are (or are not) sold.  Because such policies do not involve prices, U.S. courts, if the policies are appropriately created and implemented, are likely to assess them under the lenient “rule of reason” as well.  It is therefore unsurprising that such policies are gaining in popularity.  One recent study surveying over 1,000 European retailers found that policies precluding or limiting e-commerce sales are widely in place, with approximately 18% of respondents reporting that manufacturers limit their ability to sell through online marketplaces or platforms and 11% reporting that manufacturers restrict their online sales to their own website.[v]

Ultimately, addressing brand dilution is critical in the e-commerce age.  It is also highly fact specific and typically requires custom solutions tailored to a company’s commercial and legal objectives.  Adopting an “off the rack” MAP policy and simply hoping for the best is unwise and could lead to expensive litigation or, worse yet, liability and costly penalties.  But antitrust lawyers are here to help companies navigate the legal landscape to come up with commonsense solutions that work while minimizing legal risk.


[i] Erika L. Amarante, A Roadmap to Minimum Advertised Price Policies, 16 The Franchise Lawyer 4 (2013), https://www.wiggin.com/erika-l-amarante/publications/a-roadmap-to-minimum-advertised-price-policies/.

[ii] Ruth Simon, Brands Strike Back: Seven Strategies to Loosen Amazon’s Grip, Wall St. J. (Aug. 7, 2017), https://www.wsj.com/articles/brands-strike-back-seven-strategies-to-loosen-amazons-grip-1502103602.

[iii] Danae Vara Borrell, Why Manufacturers Can’t Afford to Ignore Minimum Advertised Price Policies, Forbes Tech. Council (Oct. 17, 2018), https://www.forbes.com/sites/forbestechcouncil/2018/10/17/why-manufacturers-cant-afford-to-ignore-minimum-advertised-price-policies/#167f8d5417ec.

[iv] See In re Sony Music Entertainment, Inc., No. C-3971, 2000 WL 1257796 (F.T.C. Aug. 30, 2000).

[v] See European Commission, Final Report on the E-commerce Sector Inquiry, Commission Staff Working Document ¶ 461, http://ec.europa.eu/competition/antitrust/sector_inquiry_swd_en.pdf.

© 1998-2019 Wiggin and Dana LLP

Ericsson Offers FRAND – District Court Endorses Comparable Licenses, Rejects SSPPU Royalty Rate

On May 23, 2019, the court issued a declaratory judgment in HTC v. Ericsson, No. 18-cv-00243, pending in the United States District Court for the Eastern District of Texas (Judge Gilstrap). That judgment confirmed that Ericsson’s 4G standard-essential patents (“SEPs”) convey significant value to mobile handsets and held that Ericsson made an offer to HTC that complied with Ericsson’s obligations to license on fair, reasonable, and non-discriminatory (“FRAND”) terms. The decision, published on the heels of Judge Koh’s recent opinion in FTC v. Qualcomm, provides much-needed clarity to SEP owners by definitively rejecting the smallest salable patent-practicing unit (“SSPPU”) royalty theory in favor of a real-world, market-based approach.

The Dispute

Ericsson owns a large portfolio of cellular patents essential to the 2G, 3G, and 4G standards that it licenses to handset makers worldwide. As a member of the ETSI standard setting organization, Ericsson agreed to license these patents on FRAND terms. Ericsson offered a license to HTC at a rate of $2.50 per 4G device, or 1% of the net device price with a $1 floor and $4 cap. HTC countered with a rate of $0.10 per 4G device. HTC sued Ericsson, claiming that Ericsson’s offered royalty rate was too high, and that Ericsson breached its FRAND commitment.
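The arithmetic behind Ericsson’s two offer structures is worth making concrete.  As an illustration only (the figures come from the offers described above; the comparison logic is our own, not part of the court’s record), the percentage-based offer applies a 1% rate to the net device price, clamped to the $1 floor and $4 cap:

```python
def percentage_offer(net_price: float, rate: float = 0.01,
                     floor: float = 1.00, cap: float = 4.00) -> float:
    """Per-device royalty under the percentage offer:
    1% of the net device price, clamped between $1 and $4."""
    return min(max(net_price * rate, floor), cap)

FLAT_OFFER = 2.50  # the alternative flat per-device offer

# The floor binds below $100, the cap binds above $400, and the
# percentage offer matches the flat $2.50 rate at a $250 device price.
for price in (80, 250, 600):
    print(price, percentage_offer(price))
```

Under this structure, the percentage offer exceeds the flat $2.50 rate only for devices with a net price above $250, which helps explain why the two offers were treated as roughly equivalent alternatives.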

A jury trial was held in February 2019. HTC argued that a royalty base must be calculated based on the profit margin of the baseband processor (which HTC argued was the SSPPU) rather than the price of the device as a whole. Ericsson argued that HTC’s SSPPU approach dramatically undervalued 4G cellular technology and that Ericsson’s patents in particular were worth far more. After a five-day trial, the jury found that Ericsson’s offers did not breach Ericsson’s commitment to license on FRAND terms and conditions.

The Decision

Following the verdict, the district court also issued its findings of fact and conclusions of law in connection with ruling on Ericsson’s request for a declaratory judgment that it had complied with FRAND. This declaration reaffirmed the jury’s findings, while also addressing more fully some key questions.

First, the court stated unequivocally that the ETSI FRAND commitment does not require a company to license its SEPs based on the profit or cost of the baseband processor or SSPPU. The district court’s decision is consistent with Federal Circuit precedent, such as Ericsson v. D-Link, which holds that “courts must consider the facts of record when instructing the jury and should avoid rote reference to any particular damages formula.”

Second, the order went further to conclude that Ericsson’s 4G portfolio is worth significantly more than a royalty rate based on the profit margin or cost of the baseband processor in HTC’s phones (HTC’s “SSPPU”). Looking to industry-wide evidence, the court held that the value of cellular technology far exceeded a valuation based on the price or profit of a baseband processor. The court found that “Ericsson established, and HTC’s own experts conceded, that there are no examples in the industry of licenses that have been negotiated based on the profit margin, or even the cost, of a baseband processor” and that credible evidence supported a finding that “the profit margin, or even the cost, of the baseband processor is not reflective of the value conferred by Ericsson cellular essential patents.”

Third, the court determined that both of Ericsson’s offers to HTC—(1) $2.50 per 4G device or (2) 1% with a $1 floor and $4 cap—were fair, reasonable, and non-discriminatory. The court found that Ericsson’s “comparable licenses provide the best market-based evidence of the value of Ericsson’s SEPs and that Ericsson’s reliance on comparable licenses is a reliable method of establishing fair and reasonable royalty rates that is consistent with its FRAND commitment.” At trial, evidence was presented regarding Ericsson’s licenses with Apple, BLU, Coolpad, Doro, Fujitsu, Huawei, Kyocera, LG, Panasonic, Samsung, Sharp, Sony, and ZTE. The court noted that several of Ericsson’s licenses contained express terms that were “similar or substantially similar” to Ericsson’s offers to HTC and rejected the argument that Ericsson’s offers to HTC were discriminatory.

Why It Matters

Judge Gilstrap’s declaration represents an important development in FRAND case law that looks to industry practice and market evidence rather than untested licensing theories. It affirms that basing a rate on comparable licenses is an acceptable FRAND methodology.

The decision also rejects the SSPPU royalty theory. Some have read the recent FTC v. Qualcomm opinion to suggest that a FRAND royalty must be structured as a percentage rate on a baseband processor. Judge Gilstrap’s declaration demonstrates why such a reading is incorrect.  First, the declaration explains that the ETSI FRAND commitment simply does not require a SSPPU royalty base. Second, even if one were to indulge the SSPPU approach, the SSPPU for many standard-essential patents is not limited to a baseband processor. Third, a wealth of market evidence shows that Ericsson’s patents (and standard-essential patents generally) are far more valuable than a baseband processor-based royalty would reflect.

© McKool Smith
This article was written by Nicholas Mathews from McKool Smith.

The Tor Browser Afforded CDA Immunity for Dark Web Transactions

The District of Utah ruled in late May that Section 230 of the Communications Decency Act, 47 U.S.C. §230 (“CDA”) shields The Tor Project, Inc. (“Tor”), the organization responsible for maintaining the Tor Browser, from claims for strict product liability, negligence, abnormally dangerous activity, and civil conspiracy.

The claims were asserted against Tor following an incident where a minor died after taking illegal narcotics purchased from a site on the “dark web” on the Tor Network. (Seaver v. Estate of Cazes, No. 18-00712 (D. Utah May 20, 2019)). The parents of the child sued, among others, Tor as the service provider through which the teenager was able to order the drug on the dark web. Tor argued that the claims against it should be barred by CDA immunity and the district court agreed.

The Onion Router, or “Tor” Network, was originally created by the U.S. Naval Research Laboratory for secure communications and is now freely available for anyone to download from the Tor website.  The Tor Network allows users to access the internet anonymously and allows some websites to operate only within the Tor Network. Thus, the Tor Network attempts to provide anonymity protections both to operators of a hidden service and to visitors of a hidden service. The Tor Browser masks a user’s true IP address by bouncing user communications around a distributed network of relay computers, called “nodes,” which are run by volunteers around the world. Many people and organizations use the Tor Network for legal purposes, such as for anonymous browsing by privacy-minded users, journalists, human rights organizations and dissidents living under repressive regimes. However, the Tor Network is also used as a forum and online bazaar for illicit activities and hidden services (known as the “dark web”). The defendant Tor Project is a Massachusetts non-profit organization responsible for maintaining the software underlying the Tor Browser.

To qualify for immunity under the CDA, a defendant must show that 1) it is an “interactive computer service”; 2) its actions as a “publisher or speaker” form the basis for liability; and 3) “another information content provider” provided the information that forms the basis for liability. The first factor is generally not an issue in disputes where CDA immunity is invoked, as websites or social media platforms typically fit the definition of an “interactive computer service.” The court found that Tor qualified as an “interactive computer service” because it enables computer access by multiple users to computer servers via its Tor Browser.  The remaining factors were straightforward for the court to analyze, as the plaintiff sought to hold Tor liable as the publisher of third-party information (e.g., the listing for the illicit drug).

The outcome was not surprising, given that courts have previously dismissed tort claims against platforms or websites where illicit goods were purchased (such as the recent Armslist case decided by the Wisconsin Supreme Court where claims against a classified advertising website were deemed barred by the CDA).

The case also posed interesting jurisdictional questions, as the details of the Tor network are shrouded in anonymity and there are no accurate figures as to how many users or nodes exist within the Utah forum.  The court determined that, under plaintiff’s rough estimation, around 3,000-4,000 Utah residents used Tor daily and perhaps became part of the service (“Plaintiff has set forth substantial evidence to support the assumption that many of these transactions and relays are occurring in Utah on a daily basis”).  In a breezy analysis, the court found that plaintiff had provided sufficient evidence to make a prima facie showing that Tor maintains continuous and systematic contacts in the state of Utah so as to satisfy the general jurisdiction standard.

This case is a reminder of the breadth of the CDA, as well as a reminder that many of its applications result in painful and somewhat controversial outcomes.

© 2019 Proskauer Rose LLP.

Article by Stephanie J. Kapinos of Proskauer Rose LLP.

For more on Web & Internet issues, see the National Law Review page on Communications, Media & Internet.

 

Forget About Fake News, How About Fake People? California Starts Regulating Bots as of July 1, 2019

California SB 1001, Cal. Bus. & Prof. Code § 17940, et seq., takes effect July 1, 2019. The law regulates the online use of “bots” – computer programs that interact with a human being and give the appearance of being an actual person – by requiring disclosure when bots are being used.

The law applies in limited cases of online communications to (a) sell commercial goods or services, or (b) influence a vote in an election. Specifically, the law prohibits using a bot in those circumstances, “with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election.” Disclosure of the existence of the bot avoids liability.

As more and more companies use bots, artificial intelligence, and voice recognition technology to provide customer service in online transactions, businesses will need to consider carefully how and when to disclose that their helpful (and often anthropomorphized) digital “assistants” are not really human beings.  In a true customer-service situation where the bot is fielding questions about warranty service, product returns, etc., there may be no duty to disclose. But a line could be crossed if any upsell is included, such as “Are you interested in learning about our latest line of products?”

Fortunately, the law doesn’t expressly create a private cause of action against violators. However, it remains to be seen whether lawsuits will nevertheless be brought under general laws prohibiting unfair or deceptive trade practices, alleging failure to disclose the existence of a bot.

Also, an exemption applies for online “platforms,” defined as: “any public-facing Internet Web site, Web application, or digital application, including a social network or publication, that has 10,000,000 or more unique monthly United States visitors or users for a majority of months during the preceding 12 months.”  Accordingly, operators of very large online sites or services are exempt.
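The exemption threshold lends itself to a mechanical check.  The sketch below is an illustration only, not legal advice; it assumes “a majority of months during the preceding 12 months” means more than six of those twelve months, a reading the statute’s text supports but does not spell out:

```python
TEN_MILLION = 10_000_000

def is_exempt_platform(monthly_us_visitors: list) -> bool:
    """Rough check of SB 1001's 'platform' exemption: 10,000,000 or
    more unique monthly U.S. visitors or users for a majority of the
    months in the preceding 12-month period (assumed: more than 6)."""
    if len(monthly_us_visitors) != 12:
        raise ValueError("expected 12 monthly visitor counts")
    big_months = sum(1 for v in monthly_us_visitors if v >= TEN_MILLION)
    return big_months > len(monthly_us_visitors) // 2
```

On this reading, a site that clears 10 million unique U.S. visitors in seven of the preceding twelve months would qualify, while one that does so in only six months would not.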

For marketers who use bots in customer communications – and who are not large enough to take advantage of the “platform” exemption – the time is now to review those practices and decide whether disclosures may be appropriate.

©2019 Greenberg Traurig, LLP. All rights reserved.
For more on Internet & Communications see the National Law Review page on Communications, Media & Internet

Fake Followers; Real Problems

Fake followers and fake likes have spread throughout social media in recent years.  Social media platforms such as Facebook and Instagram have announced that they are cracking down on so-called “inauthentic activity,” but the practice remains prevalent.  For brands advertising on social media, paying for fake followers and likes is tempting—the perception of having a large audience offers a competitive edge by lending the brand additional legitimacy in the eyes of consumers, and the brand’s inflated perceived reach attracts higher profile influencers and celebrities for endorsement deals.  But the benefits come with significant legal risks.  By purchasing fake likes and followers, brands could face enforcement actions from government agencies and false advertising claims brought by competitors.

Groundbreaking AG Settlement: Selling Fake Engagement Is Illegal

On January 30, 2019, the New York Attorney General announced a settlement prohibiting Devumi LLC from selling fake followers and likes on social media platforms.  Attorney General Letitia James announced that the settlement marked “the first finding by a law enforcement agency that selling fake social media engagement and using stolen identities to engage in online activity is illegal.”[i] 

Devumi’s customers ranged from actors, musicians, athletes, and modeling agencies to businesspeople, politicians, commentators, and academics, according to the settlement.  Customers purchased Devumi’s services hoping to show the public that they or their products were more popular (and by implication, more legitimate) than they really were.  The AG said Devumi’s services “deceived and attempted to affect the decision-making of social media audiences, including: other platform users’ decisions about what content merits their own attention; consumers’ decisions about what to buy; advertisers’ decisions about whom to sponsor; and the decisions by policymakers, voters, and journalists about which people and policies have public support.”[ii]

Although the Devumi settlement did not impose a monetary punishment, it opened the doors for further action against similar services, and the AG warned that future perpetrators could face financial penalties.

Buyers Beware

Although the New York AG’s settlement with Devumi only addressed sellers of fake followers and likes, companies buying the fake engagement could also face enforcement actions from government agencies and regulatory authorities.  But the risk doesn’t end there—brands purchasing fake engagement could become targets of civil suits brought by competitors, where the potential financial exposure could be much greater.

Competing brands that run legitimate social media marketing campaigns and are losing business to brands buying fake likes and followers may be able to recover through claims brought under the Lanham Act and/or state unfair competition laws, such as California’s Unfair Competition Law (“UCL”).[iii]

The Lanham Act imposes liability upon “[a]ny person who, on or in connection with any goods or services, … uses in commerce any … false or misleading description of fact, or false or misleading representation of fact, which … is likely to … deceive as to the … sponsorship, or approval of his or her goods, services, or commercial activities by another person” or “in commercial advertising … misrepresents the nature, characteristics, qualities, or geographic origin of … goods, services, or commercial activities.”[iv]

Fake likes on social media posts could constitute false statements about the “approval of [the advertiser’s] goods, services, or commercial activities” under the Lanham Act.  Likewise, a fake follower count could misrepresent the nature or approval of “commercial activities,” deceiving the public into believing a brand is more popular among consumers than it is.

The FTC agrees that buying fake likes is unlawful.  It publishes guidelines to help the public understand whether certain activities could violate the FTC Act.  In the FAQ for the Endorsement Guides, the FTC states, “an advertiser buying fake ‘likes’ is very different from an advertiser offering incentives for ‘likes’ from actual consumers.  If ‘likes’ are from non-existent people or people who have no experience using the product or service, they are clearly deceptive, and both the purchaser and the seller of the fake ‘likes’ could face enforcement action.” (emphasis added).[v]  

Although there is no private right of action to enforce FTC Guidelines, the Guidelines may inform what constitutes false advertising under the Lanham Act.[vi]  Similarly, violations of the FTC Act (as described in FTC Guidelines) may form the basis of private claims under state consumer protection statutes, including California’s UCL.[vii]

While the Devumi settlement paved the way for private lawsuits against sellers of fake social media engagement, buyers need to be aware that they could face similar consequences.  Because of the risk of both government enforcement actions and civil lawsuits brought by competitors, brands should resist the temptation to artificially grow their social media footprint and instead focus on authentically gaining popularity.  Conversely, brands operating legitimately but losing business to competitors buying fake engagement should consider using the Lanham Act and state unfair competition laws as tools to keep the playing field more even.


[i] Attorney General James Announces Groundbreaking Settlement with Sellers of Fake Followers and “Likes” on Social Media, N.Y. Att’y Gen.

[ii] Id.

[iii] Cal. Bus. & Prof. Code § 17200, et seq.

[iv] 15 U.S.C. § 1125(a).

[v] The FTC’s Endorsement Guides: What People Are Asking, Fed. Trade Comm’n (Sept. 2017).

[vi] See Grasshopper House, LLC v. Clean & Sober Media, LLC, No. 218CV00923SVWRAO, 2018 WL 6118440, at *6 (C.D. Cal. July 18, 2018) (“a ‘plaintiff may and should rely on FTC guidelines as a basis for asserting false advertising under the Lanham Act.’”) (quoting Manning Int’l Inc. v. Home Shopping Network, Inc., 152 F. Supp. 2d 432, 437 (S.D.N.Y. 2001)).

[vii] See Rubenstein v. Neiman Marcus Grp. LLC, 687 F. App’x 564, 567 (9th Cir. 2017) (“[A]lthough the FTC Guides do not provide a private civil right of action, ‘[v]irtually any state, federal or local law can serve as the predicate for an action under [the UCL].’”) (quoting Davis v. HSBC Bank Nev., N.A., 691 F.3d 1152, 1168 (9th Cir. 2012)).

 

© 2019 Robert Freund Law.
This post was written by Robert S. Freund of Robert Freund Law.

Using Prior FCC Rulings and Focusing on Human Intervention, Court Finds Texting Platform Is Not An ATDS

In today’s world of ever-conflicting TCPA rulings, it is important to remember that, where courts are asked to construe the TCPA’s ATDS definition, their inquiry will revolve around whether that definition includes only devices that actually generate random or sequential numbers or also devices with a broader range of functionalities.  It is equally important to remember that, when courts determine whether a calling or text messaging system meets the ATDS definition, the level of human intervention used in making a call or sending a text message is a separate, decisive inquiry.

As we’ve previously mentioned, this latter inquiry is important in all types of TCPA cases, but recently the issue has been given special attention in cases regarding text messages and text messaging platforms.  Indeed, this happened again yesterday when the court in Duran v. La Boom Disco determined a nightclub’s use of text messaging did not violate the TCPA because of the level of human involvement exhibited by the nightclub in operating the software and scheduling the sending of messages.

Background

In Duran v. La Boom Disco, the United States District Court for the Eastern District of New York was tasked with analyzing the ExpressText and EZ Texting platforms, which are text messaging software platforms offered to businesses and franchises, whereby the business can write, program, and schedule text messages to be sent to a curated list of consumer mobile phone numbers.

At first glance, the facts in Duran appear to signal a slam dunk case for the plaintiff.  The defendant nightclub had used the ExpressText and EZ Texting platforms to send marketing text messages to the plaintiff after he replied to a call-to-action advertisement by texting the keyword “TROPICAL” to obtain free admission to the nightclub for a Saturday night event.  Importantly, though, after the plaintiff texted this keyword, he never received a second text message asking whether he consented to receive recurring automated text messages (commonly referred to as a “double opt-in” message).  He did, however, receive approximately 100 text messages advertising other events at the nightclub and encouraging him to buy tickets, which ultimately led him to bring a TCPA action against the club.

Accordingly, the initial issue that the Duran court was tasked with deciding was whether the defendant nightclub had texted the plaintiff without his prior express written consent.  The court quickly dispensed with it, determining that the nightclub had not properly obtained written consent from the plaintiff, as it had failed to use a double opt-in process to ensure the plaintiff explicitly agreed to receive recurring automated marketing text messages and could not otherwise prove that the plaintiff explicitly consented to receiving recurring messages of a marketing nature (which, under the TCPA, the nightclub had the burden to prove).

At this stage, then, things were looking bad for the nightclub.  However, this was not the end of the court’s analysis, as the nightclub could only be liable for sending these non-consented-to messages if they had been sent using an ATDS.  Thus, the court turned to its second – and much more important – line of inquiry: whether the ExpressText and EZ Texting software, as used by the nightclub to text the plaintiff, qualified as an ATDS.

Defining the ATDS Term in the Aftermath of ACA International

In order to determine whether the ExpressText and EZ Texting platforms met the TCPA’s ATDS definition, the court performed an analysis that has become all too common since the FCC’s 2015 Declaratory Order was struck down in ACA International: determining what the appropriate definition of ATDS actually is.  With respect to this issue, the litigants took the same positions that we typically see advanced.  The plaintiff argued that the ExpressText and EZ Texting platforms were the equivalent of “predictive dialers” that could “dial numbers from a stored list,” which were included within the TCPA’s ATDS definition.  The nightclub countered that predictive dialers and devices that dialed from a database fell outside of the ATDS definition, meaning the nightclub’s use of the ExpressText and EZ Texting platforms should not result in TCPA liability.

The court began the inquiry with what is now the all-too-familiar analysis of the extent to which the D.C. Circuit’s opinion in ACA International invalidated the FCC’s prior 2003 and 2008 predictive dialer rulings.  After examining the opinion, the court found that those prior rulings still remained intact because “the logic behind invalidating the 2015 Order does not apply to the prior FCC orders.”  The court then concluded that, because the 2003 and 2008 ATDS rulings remained valid, it could use the FCC’s 2003 and 2008 orders to define the ATDS term, and that, based on these rulings, the TCPA also prohibited defendants from sending automated text messages using predictive dialers and/or any dialing system that “dial numbers from a stored list.”

However, the fact that the ExpressText and EZ Texting platforms dialed numbers from a stored list did not end the inquiry since, under the 2003 and 2008 orders, “equipment can only meet the definition of an autodialer if it pulls from a list of numbers, [and] also has the capacity to dial those numbers without human intervention.”  And it was here where the plaintiff’s case fell apart, for while the ExpressText and EZ Texting platforms dialed from stored lists and saved databases, these platforms could not dial the stored numbers without a human’s assistance.  As the court explained:

When the FCC expanded the definition of an autodialer to include predictive dialers, the FCC emphasized that ‘[t]he principal feature of predictive dialing software is a timing function.’  Thus, the human-intervention test turns not on whether the user must send each individual message, but rather on whether the user (not the software) determines the time at which the numbers are dialed….  There is no dispute that for the [ExpressText and EZ Texting] programs to function, ‘a human agent must determine the time to send the message, the content of the messages, and upload the numbers to be texted into the system.’

In sum, because a user determines the time at which the ExpressText and EZ Texting programs send messages to recipients, they operate with too much human involvement to meet the definition of an autodialer.

Human Intervention Saves the Day (Again)

In Duran, the district court made multiple findings that would ordinarily signal doom for a defendant: it broadly defined the ATDS term to include predictive dialers and devices that dialed numbers from a stored list/database and it found the nightclub’s text messages to have been sent without appropriately obtaining the plaintiff’s express written consent.  However, despite these holdings, the nightclub was still able to come out victorious because of the district court’s inquiry into the human intervention issue and because the ExpressText and EZ Texting platforms the nightclub used required just enough human involvement to move the systems into a zone of protection.  In many ways, this holding – and the analysis employed – is unique; however, with respect to the focus on the human intervention requirement, the district court’s decision can be seen as another step down a path that has been favorable to web-based text messaging platforms.

Indeed, over the course of the last two years, several courts have made it a point to note that the human intervention analysis is a separate, but equally important, determination that the court must analyze before concluding that a device is or is not an ATDS.  With respect to the text-messaging line of cases, this has especially been the case, with numerous courts noting that, no matter whether the ATDS definition is or is not limited to devices that randomly or sequentially generate numbers, the numbers must also be dialed without human intervention.  What is interesting, though, is that the courts that have interpreted this line of cases have focused on different actions as being the key source of human intervention.

As we already discussed, the court in Duran noted that the key inflection point for determining whether human intervention exists is the timing of the message and whether a human or the device itself gets to determine when the text message is sent out.  And in Jenkins v. mGage, LLC, the District Court for the Northern District of Georgia reached a similar conclusion, finding that the defendant’s use of a text messaging platform involved enough human intervention to bring the device outside of the ATDS definition because “direct human intervention [was] required to send each text message immediately or to select the time and date when, in the future, the text message will be sent.”  The District Court for the Middle District of Florida also employed this line of thinking in Gaza v. Auto Glass America, LLC, awarding summary judgment to the defendant because the text messaging system the company employed could not send messages randomly, but rather required a human agent to input the numbers to be contacted and designate the time at which the messages were to be sent.

In the case of Ramos v. Hopele of Fort Lauderdale, however, the District Court for the Southern District of Florida found a separate human action to be critical, focusing instead on the fact that “the program can only be used to send messages to specific identified numbers that have been inputted into the system by the customer.”  And another court in the Northern District of Illinois echoed this finding in Blow v. Bijora, Inc., determining that, because “every single phone number entered into the [text] messaging system was keyed via human involvement … [and because] the user must manually draft the message that the platform will send,” the text messaging platform did not meet the TCPA’s ATDS requirements.

With the entire industry still awaiting a new ATDS definition from the FCC, there remains much confusion as to how the ATDS term will be interpreted and applied to users of both calling platforms and texting platforms.  Fortunately, though, a trend appears to be developing for text message platforms, with multiple courts finding that human intervention is a crucial issue that can protect companies from TCPA liability.  Granted, these courts have not yet agreed on what human action actually removes a platform from the ATDS definition, and, as we’ve noted previously, even if human intervention remains the guiding standard, determining precisely what qualifies as sufficient intervention, and when in the process of transmitting a message the relevant intervention must occur, remains much more an art than a science.  However, the cases mentioned above are still useful in pointing marketers everywhere in the right direction and present guidelines for ensuring they send text messages in compliance with the TCPA.

 

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.
Read more TCPA litigation news on the National Law Review’s Communications law page.

Get a Head Start in 2019 – Leveraging Your Cyber Liability Insurance

As 2019 begins, companies should seriously consider the financial and reputational impacts of cyber incidents and invest in sufficient and appropriate cyber liability coverage. According to a recently published report, incidents of lost personal information (such as protected health information) are on the rise and are imposing significant costs on companies. Although cyber liability insurance is not new, many companies lack sufficient coverage. RSM US LLP, NetDiligence 2018 Cyber Claims Study (2018).

According to the 2018 study, cyber claims are impacting companies of all sizes with revenues ranging from less than $50 million to more than $100 billion.  Further, the average total breach cost alone is $603.9K. This does not include crisis services cost (average $307K), the legal costs (defense = $106K; settlement = $224K; regulatory defense = $514K; regulatory fines = $18K), and the cost of business interruption (all costs = $2M; recovery expense = $957K).  In addition to these financial costs, reputational impact stemming from cyber incidents can materially set companies back for a long period of time after the incident.

Companies can reduce risk associated with cyber incidents by developing and implementing privacy and security policies, educating and training employees, and building strong security infrastructures.  Nevertheless, there is no such thing as 100% security, and thus companies should consider leveraging cyber liability insurance to offset residual risks.  With that said, cyber liability coverages vary across issuers and can contain many carve-outs and other complexities that can prevent or reduce coverage.  Therefore, stakeholders should review their cyber liability policies to ensure that they understand the terms and conditions of such policies. Key items to evaluate can include: coverage levels per claim and in the aggregate, retention amounts, notice requirements, exclusions, and whether liability arising from malicious third party conduct are sufficiently covered.

While cyber liability insurance will not itself reduce the risk of a cyber incident, it is increasingly a critical component of a holistic risk mitigation strategy given the world we live in.

©2019 Epstein Becker & Green, P.C. All rights reserved.
This post was written by Alaap B. Shah and Daniel Kim from Epstein Becker & Green, P.C.

Now I Get It!: Using the FCC’s Order Keeping Text Messages as “Information Services” to Better Understand the Communications Act

Little known fact: the TCPA is just a tiny little part of something much bigger and more complex called the Communications Act of 1934, as amended by the Telecom Act of 1996 (which the FCC loves to just call the “Communications Act”). And yes, I know the TCPA was enacted in 1991, but trust me, it is still part of the Communications Act of 1934.

The Communications Act divides communications services into two mutually exclusive types: highly regulated “telecommunications services” and lightly regulated “information services.”

So let’s look at some definitions:

A “telecommunications service” is a common carrier service that requires “the offering of telecommunications for a fee directly to the public, or to such classes of users as to be effectively available to the public, regardless of the facilities used.”

“Telecommunications” is “the transmission, between or among points specified by the end user, of information of the user’s choosing without change in the form or content of the information as sent and received.”

By contrast, an “information service” is “the offering of a capability for generating, acquiring, storing, transforming, processing, retrieving, utilizing, or making available information via telecommunications, and includes electronic publishing, but does not include any use of any such capability for the management, control, or operation of a telecommunications system or the management of a telecommunications service.”

Make sense so far? Basically a telecommunications service is something that telecommunications companies–who are common carriers–can’t tinker with and have to automatically connect without modifying. For instance, if I want to call my friends from law school and wish them well, Verizon can’t say, “wait a minute, Eric doesn’t have any friends from law school,” and refuse to connect the call. Verizon must just connect the call. It doesn’t matter who I am calling, how long the call will be, or why I’m making the call; the call must connect. The end.

Information services are totally different animals. Carriers can offer or not offer and tinker and manipulate such messages all they want–see also net neutrality.

So if text messages are a telecommunication then they must be connected without question. But if text messages are an information service then carriers can decide which messages get through and which don’t.

It might seem like you’d want text messages to be information services–after all why would we want the carriers determining how and when we can text each other? Well the FCC has an answer– automatic spam texts.

If text messages are subject to common carrier rules then people can blast your phone with spam text messages and the carriers can’t stop them. True the TCPA exists so you can sue the texter but–as we know–the vast majority of spammers are shady fly-by-nights or off-shore knuckleheads that you can’t find. So the FCC believes that keeping text messages categorized as “information services”–as they are currently defined–will keep spammers away from your SMS inbox. It issued a big order today accomplishing just that. 

And to be sure, the carriers are monitoring and blocking spam texts as we speak. As the FCC finds: “wireless messaging providers apply filtering to prevent large volumes of unwanted messaging traffic or to identify potentially harmful texts.”  The FCC credits these carrier efforts with keeping text messages relatively spam free:

For example, the spam rate for SMS is estimated at 2.8% whereas the spam rate for email is estimated at over 50%.  Wireless messaging is therefore a trusted and reliable form of communication for many Americans. Indeed, consumers open a far larger percentage of wireless messages than email and open such messages much more quickly.

So from a policy perspective keeping text messages as information services probably makes sense, but let’s review those definitions again.

A telecommunication service is essentially the transmission of information of the user’s choosing.

An information service is “the offering of a capability for generating, acquiring, storing, transforming, processing, retrieving, utilizing, or making available information via telecommunications.”

So is a text message the transmission of information of my choosing or is it the use of Verizon’s ability to store and retrieve information I am sending? (And is there really even a difference?)

Well the FCC says texts are absolutely information services and here’s why:

  • SMS and MMS wireless messaging services provide the capability for “storing” and “retrieving” information. When a user sends a message, the message is routed through servers on mobile networks. When a recipient device is unavailable to receive the message because it is turned off, the message will be stored at a messaging center in the provider’s network until the recipient device is able to receive it.

  • SMS and MMS wireless messaging services also involve the capability for “acquiring” and “utilizing” information. As CTIA explains, a wireless subscriber can “ask for and receive content, such as weather, sports, or stock information, from a third party that has stored that information on its servers. SMS subscribers can ‘pull’ this information from the servers by making specific requests, or they can signal their intent to have such information regularly ‘pushed’ to their mobile phone.”

  • SMS and MMS wireless messaging services involve “transforming” and “processing” capabilities. Messaging providers, for example, may change the form of transmitted information by breaking it into smaller segments before delivery to the recipient in order to conform to the character limits of SMS.

Yeah…I guess. But realistically when I send a text I just want it to get there the way I sent it. Maybe there’s some storing and utilizing and processing or whatever but not very much.

And that was Twilio’s point. It asserted:  “the only offering that wireless carriers make to the public, with respect to messaging, is the ability of consumers to send and receive messages of the consumers’ design and choosing.” That sounds right.

Well the FCC disagrees: “These arguments are unpersuasive.”

The FCC’s point is that “what matters are the capabilities offered by the service, and as we explain above, wireless messaging services feature storage, retrieval, and other information-processing capabilities.”

Hmmm. ok. I guess I’m ok with that if you are.

But let’s get to the good stuff from a TCPA perspective. Recall that a text message is a “call” for purposes of the TCPA. Well if a text isn’t even a telecommunication how can it be a call? Asks Twilio.

Yeah, FCC, how can it be a call? Asks the Czar.

The Commission answers:

the Commission’s decision merely clarified the meaning of the undefined term “call” in order to address the obligations that apply to telemarketers and other callers under the TCPA. That decision neither prohibits us from finding that wireless messaging service is an information service, nor compels us to conclude that messaging is a telecommunications service.

Ok. Well. Why not?

The Commission answers further:

The TCPA provision itself generally prohibits the use of a facsimile machine to send unsolicited advertisements, but that does not constitute a determination that an individual’s sending of a fax is a telecommunications service, just as the application to an individual’s making “text calls” does not reflect a determination that wireless messaging is a telecommunications service. In any event, for purposes of regulatory treatment, there is a significant difference between being subject to Commission regulation and being subject to per se common carrier regulation. Only the latter requires classification as a telecommunications service. We clarify herein that SMS and MMS wireless messaging are Title I services, and thus, will not be subject to per se common carrier regulation.

Umm FCC, no disrespect intended, but I kind of feel like that doesn’t really answer the question.

But in any event, the FCC plainly believes that text messages are a “call” for purposes of the TCPA but are not a “telecommunication” for purposes of common carrier regulation.

From a policy perspective I’m fine with the conclusion the Commission reached–it makes sense to keep text messages free from spam. But we have to be honest with ourselves here, the Commission just did legal somersaults to get there. Maybe it’s time for Congress to take another look at the Communications Act hmmm?

In any event, now you get it!

 

Copyright © 2018 Womble Bond Dickinson (US) LLP All Rights Reserved.
This post was written by Eric Troutman of Womble Bond Dickinson (US) LLP.
Read more news about the TCPA at the National Law Review.

The Importance of Information Security Plans

In the first installment of our weekly series during National Cybersecurity Awareness Month, we examine information security plans (ISP) as part of an overall cybersecurity strategy.  Regardless of the size or function of an organization, having an ISP is a critical planning and risk management tool and, depending on the business, it may be required by law.  An ISP details the categories of data collected, the ways that data is processed or used, and the measures in place to protect it.  An ISP should address different categories of data maintained by the organization, including employee data and customer data as well as sensitive business information like trade secrets.

Having an ISP is beneficial for many reasons but there are two primary benefits.  First, once an organization identifies the data it owns and processes, it can more effectively assess risks and protect the data.  Second, in the event of a cyber attack or breach, an organization’s thorough understanding of the types of data it holds and the location of that data will expedite response efforts and reduce financial and reputational damage.

While it is a tedious task to determine the data that an organization collects and create a data inventory from that information, it is well worth the effort.  Once an organization assembles a data inventory, it can assess whether it needs all the data it collects before it invests time, effort and money into protecting it.  From a risk management perspective, it is always best to collect the least amount of information necessary to carry out business functions.  By eliminating unnecessary data, there is less information to protect and, therefore, less information at risk in the event of a cyber attack or breach.

Some state, federal and international laws require an ISP (or something like it).  For example, in Massachusetts, all businesses (regardless of location) that collect personal information of Massachusetts residents, which includes an organization’s own employees, “shall develop, implement, and maintain a comprehensive information security program that is written . . . and contains administrative, technical, and physical safeguards” based on the size, operations and sophistication of the organization.  The MA Office of Consumer Affairs and Business Regulation created a guide for small businesses to assist with compliance.

Connecticut does not require an ISP unless you contract with the state or are a health insurer.  However, the state data breach law pertaining to electronically stored information offers a presumption of compliance in the event of a breach if the organization timely notifies and reports under the statute and follows its own ISP.  Practically speaking, this means that the state Attorney General’s office is far less likely to launch an investigation into the breach.

On the federal level, by way of example, the Gramm Leach Bliley Act (GLBA) requires financial institutions to have an ISP and the Health Insurance Portability and Accountability Act (HIPAA) requires covered entities to perform a risk analysis, which includes an assessment of the types of data collected and how that data is maintained and protected.  Internationally, the EU General Data Protection Regulation (GDPR), which took effect on May 25, 2018 and applies to many US-based organizations, requires a “record of processing activities.”  While this requirement is more extensive than the ISP requirements noted above, the concept is similar.

Here is a strategy for creating an ISP for your organization:

  1. Identify the departments that collect, store or process data.
  2. Ask each department to identify: (a) the categories of data they collect (e.g., business data and personal data such as name, email address, date of birth, social security number, credit card or financial account number, government ID number, etc.); (b) how and why they collect it; (c) how they use the data; (d) where it is stored; (e) format of the data (paper or electronic); and (f) who has access to it.
  3. Examine the above information and determine whether it needs to continue to be collected or maintained.
  4. Perform a security assessment, including physical and technological safeguards that are in place to protect the data.
  5. Devise additional measures, as necessary, to protect the information identified.  Such measures may include limiting electronic access to certain employees, file encryption, IT security solutions to protect the information from outside intruders or locked file cabinets for paper documents.  Training should always be an identified measure for protecting information and we will explore that topic thoroughly later this month.
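As a purely illustrative sketch of steps 1 through 3 above (every department name, data category, and field here is hypothetical, not drawn from any particular organization), a data inventory can be modeled as a simple structure that also flags data the organization no longer needs:

```python
# Minimal sketch of a data inventory per steps 1-3 above.
# All departments, categories, and fields are hypothetical examples.
from dataclasses import dataclass

@dataclass
class DataRecordType:
    category: str       # e.g., "employee data", "customer data"
    fields: list        # data elements collected (step 2a)
    purpose: str        # how and why it is collected (steps 2b-2c)
    storage: str        # where it is stored (step 2d)
    fmt: str            # "paper" or "electronic" (step 2e)
    access: list        # who has access to it (step 2f)
    still_needed: bool = True  # step 3: keep only what the business requires

inventory = {
    "HR": [
        DataRecordType(
            category="employee data",
            fields=["name", "SSN", "date of birth"],
            purpose="payroll and benefits",
            storage="HRIS database",
            fmt="electronic",
            access=["HR staff"],
        ),
    ],
    "Marketing": [
        DataRecordType(
            category="customer data",
            fields=["email address"],
            purpose="newsletter",
            storage="CRM",
            fmt="electronic",
            access=["marketing staff"],
            still_needed=False,  # flagged for elimination per step 3
        ),
    ],
}

# Step 3: identify data that can be eliminated to shrink the attack surface.
to_eliminate = [
    (dept, rec.category)
    for dept, records in inventory.items()
    for rec in records
    if not rec.still_needed
]
print(to_eliminate)  # [('Marketing', 'customer data')]
```

Recording the "still needed" decision in the inventory itself makes step 3 (eliminating unnecessary data) an explicit, reviewable determination rather than an afterthought.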
© Copyright 2018 Murtha Cullina

“Hey Alexa – Tell Me About Your Security Measures”

California continues to lead the nation in cybersecurity and privacy legislation on the heels of the recent California Consumer Privacy Act of 2018 (“CCPA”).  On September 28, 2018, Governor Brown signed into law two nearly identical bills, Assembly Bill No. 1906 and Senate Bill No. 327 (the “Legislation”), each of which required the signing of the other to become law.  Thus, California becomes the first state in the nation to regulate “connected devices” – the Internet of Things (IoT). The Legislation will go into effect in January 2020.

  1. CA IoT Bills Apply to Manufacturers of Connected Devices

This Legislation applies to manufacturers of connected devices sold or offered for sale in California.  A connected device is defined as any device with an Internet Protocol (IP) or Bluetooth address, and capable of connecting directly or indirectly to the Internet.  Beyond examples such as cell phones and laptops, numerous household devices, from appliances such as refrigerators and washing machines, televisions, and children’s toys, could all meet the definition of connected device.

  2. What Manufacturers of Connected Devices Must Do

Manufacturers must equip the connected device with reasonable security feature(s) that are “appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, [and] designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure.”

The Legislation provides some guidance as to what will be considered a reasonable security measure.  Devices that either come preprogrammed with a password unique to each manufactured device, or that contain a security feature forcing the user to generate a new means of authentication before access is granted, will be deemed to have implemented a reasonable security feature.  The use of a generic, default password will not suffice.
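For illustration only (the statute prescribes outcomes, not code, and these parameter names are our own), the password guidance above reduces to a simple rule: a password unique to each device qualifies, a forced credential change before first access qualifies, and a generic default password by itself does not:

```python
# Hypothetical sketch of the Legislation's password safe harbor.
# Parameter names are illustrative, not drawn from the statute.

def meets_password_safe_harbor(unique_preprogrammed_password: bool,
                               forces_new_credentials_on_setup: bool) -> bool:
    """A device satisfies the password guidance if its preprogrammed
    password is unique to that device, or if it forces the user to set
    new credentials before access is granted. A generic, shared default
    password is neither, and so fails on its own."""
    return unique_preprogrammed_password or forces_new_credentials_on_setup

# A device shipping with "admin/admin" on every unit fails:
assert not meets_password_safe_harbor(False, False)
# A device that forces a new password at first setup passes:
assert meets_password_safe_harbor(False, True)
```

Note this covers only the password-specific safe harbor; other security features may still be “reasonable” under the Legislation’s general standard.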

Other than following this guidance, the Legislation does not provide specific methods of providing for reasonable security features.

  3. What Is Not Covered

a. Unaffiliated Third Party Software:  Many connected devices use multiple pieces of software to function.  The Legislation specifically states that “This title shall not be construed to impose any duty upon the manufacturer of a connected device related to unaffiliated third-party software or applications that a user chooses to add to a connected device.”

b. Companies That Provide Mechanisms To Sell Or Distribute Software: Application store owners, and others that provide a means of purchasing or downloading software or applications are not required to enforce compliance.

c. Devices or Functionality Already Regulated by Federal Authority: Connected devices whose functionality is already covered by federal law, regulations, or guidance of a federal agency need not comply.

d. Manufacturers Are Not Required To Lock Down Devices: Manufacturers are not required to prevent users from gaining full control of the device, including being able to load their own software at their own discretion.

  4. No Private Right of Action

No private right of action is provided; instead, the “Attorney General, a city attorney, a county counsel, or a district attorney shall have the exclusive authority to enforce this title.”

  5. Not Limited To Personal Information

Previously, other California legislation had required data security measures be implemented.  For example, California’s overarching data security law (Cal. Civ. Code § 1798.81.5) requires reasonable data security measures to protect certain types of personal information.  This current approach is not tied to personal information, but rather applies to any connected device that meets the definition provided.

  6. Likely Consequences After The Legislation Comes Into Effect in January 2020

a. Impact Will Be National: Most all manufacturers will want to sell their devices in California.  As such, they will need to comply with this California Legislation; unless they somehow segment which devices are offered for sale in the California market, they will effectively have to comply nationally.

b. While Physical Device Manufacturers Bear Initial Burden, Software Companies Will Be Affected: The Legislation applies to “any device, or other physical object that is capable of connecting to the Internet, directly or indirectly, and that is assigned an Internet Protocol address or Bluetooth address.”  While this puts the burden foremost on physical device manufacturers, software companies that provide software to device manufacturers for inclusion on the device before the device is offered for sale will need to support compliance with the Legislation.

c. Merger And Acquisition Events Will Serve As Private Enforcement Mechanisms: While there may not be a private right of action provided, whenever entities or portions of entities that are subject to the Legislation are bought and sold, the buyer will want to ensure compliance by the seller with the Legislation or otherwise ensure that the seller bears the risk or has compensated the buyer.  Effectively, this will mean that companies that want to be acquired will need to come into compliance or face a reduced sales price or a similar mechanism of risk shifting.

 

©1994-2018 Mintz, Levin, Cohn, Ferris, Glovsky and Popeo, P.C. All Rights Reserved.