Fake Followers, Real Problems

Fake followers and fake likes have spread throughout social media in recent years.  Social media platforms such as Facebook and Instagram have announced that they are cracking down on so-called “inauthentic activity,” but the practice remains prevalent.  For brands advertising on social media, paying for fake followers and likes is tempting—the perception of having a large audience offers a competitive edge by lending the brand additional legitimacy in the eyes of consumers, and the brand’s inflated perceived reach attracts higher-profile influencers and celebrities for endorsement deals.  But the benefits come with significant legal risks.  By purchasing fake likes and followers, brands could face enforcement actions from government agencies and false advertising claims brought by competitors.

Groundbreaking AG Settlement: Selling Fake Engagement Is Illegal

On January 30, 2019, the New York Attorney General announced a settlement prohibiting Devumi LLC from selling fake followers and likes on social media platforms.  Attorney General Letitia James said the settlement marked “the first finding by a law enforcement agency that selling fake social media engagement and using stolen identities to engage in online activity is illegal.”[i]

Devumi’s customers ranged from actors, musicians, athletes, and modeling agencies to businesspeople, politicians, commentators, and academics, according to the settlement.  Customers purchased Devumi’s services hoping to show the public that they or their products were more popular (and by implication, more legitimate) than they really were.  The AG said Devumi’s services “deceived and attempted to affect the decision-making of social media audiences, including: other platform users’ decisions about what content merits their own attention; consumers’ decisions about what to buy; advertisers’ decisions about whom to sponsor; and the decisions by policymakers, voters, and journalists about which people and policies have public support.”[ii]

Although the Devumi settlement did not impose a monetary punishment, it opened the door for further action against similar services, and the AG warned that future perpetrators could face financial penalties.

Buyers Beware

Although the New York AG’s settlement with Devumi only addressed sellers of fake followers and likes, companies buying the fake engagement could also face enforcement actions from government agencies and regulatory authorities.  But the risk doesn’t end there—brands purchasing fake engagement could become targets of civil suits brought by competitors, where the potential financial exposure could be much greater.

Competing brands running legitimate social media marketing campaigns, and losing business to brands buying fake likes and followers, may be able to recover through claims brought under the Lanham Act and/or state unfair competition laws, such as California’s Unfair Competition Law (“UCL”).[iii]

The Lanham Act imposes liability upon “[a]ny person who, on or in connection with any goods or services, … uses in commerce any … false or misleading description of fact, or false or misleading representation of fact, which … is likely to … deceive as to the … sponsorship, or approval of his or her goods, services, or commercial activities by another person” or “in commercial advertising … misrepresents the nature, characteristics, qualities, or geographic origin of … goods, services, or commercial activities.”[iv]

Fake likes on social media posts could constitute false statements about the “approval of [the advertiser’s] goods, services, or commercial activities” under the Lanham Act.  Likewise, a fake follower count could misrepresent the nature or approval of “commercial activities,” deceiving the public into believing a brand is more popular among consumers than it is.

The FTC agrees that buying fake likes is unlawful.  It publishes guidelines to help the public understand whether certain activities could violate the FTC Act.  In the FAQ for the Endorsement Guides, the FTC states, “an advertiser buying fake ‘likes’ is very different from an advertiser offering incentives for ‘likes’ from actual consumers.  If ‘likes’ are from non-existent people or people who have no experience using the product or service, they are clearly deceptive, and both the purchaser and the seller of the fake ‘likes’ could face enforcement action.” (emphasis added).[v]  

Although there is no private right of action to enforce FTC Guidelines, the Guidelines may inform what constitutes false advertising under the Lanham Act.[vi]  Similarly, violations of the FTC Act (as described in FTC Guidelines) may form the basis of private claims under state consumer protection statutes, including California’s UCL.[vii]

While the Devumi settlement paved the way for private lawsuits against sellers of fake social media engagement, buyers need to be aware that they could face similar consequences.  Because of the risk of both government enforcement actions and civil lawsuits brought by competitors, brands should resist the temptation to artificially grow their social media footprint and instead focus on authentically gaining popularity.  Conversely, brands operating legitimately but losing business to competitors buying fake engagement should consider using the Lanham Act and state unfair competition laws as tools to keep the playing field more even.


[i] Attorney General James Announces Groundbreaking Settlement with Sellers of Fake Followers and “Likes” on Social Media, N.Y. Att’y Gen. (Jan. 30, 2019).

[ii] Id.

[iii] Cal. Bus. & Prof. Code § 17200, et seq.

[iv] 15 U.S.C. § 1125(a).

[v] The FTC’s Endorsement Guides: What People Are Asking, Fed. Trade Comm’n (Sept. 2017).

[vi] See Grasshopper House, LLC v. Clean & Sober Media, LLC, No. 2:18-cv-00923-SVW-RAO, 2018 WL 6118440, at *6 (C.D. Cal. July 18, 2018) (“a ‘plaintiff may and should rely on FTC guidelines as a basis for asserting false advertising under the Lanham Act.’”) (quoting Manning Int’l Inc. v. Home Shopping Network, Inc., 152 F. Supp. 2d 432, 437 (S.D.N.Y. 2001)).

[vii] See Rubenstein v. Neiman Marcus Grp. LLC, 687 F. App’x 564, 567 (9th Cir. 2017) (“[A]lthough the FTC Guides do not provide a private civil right of action, ‘[v]irtually any state, federal or local law can serve as the predicate for an action under [the UCL].’”) (quoting Davis v. HSBC Bank Nev., N.A., 691 F.3d 1152, 1168 (9th Cir. 2012)).

 

© 2019 Robert Freund Law.
This post was written by Robert S. Freund of Robert Freund Law.

Using Prior FCC Rulings and Focusing on Human Intervention, Court Finds Texting Platform Is Not An ATDS

In today’s world of ever-conflicting TCPA rulings, it is important to remember that, when courts are asked to determine the TCPA’s ATDS definition, their inquiry revolves around whether that definition includes only devices that actually generate random or sequential numbers or also devices with a broader range of functionalities.  It is equally important to remember that, when courts are trying to determine whether a calling/text messaging system meets the ATDS definition, the level of human intervention used in making a call or sending a text message is a separate, decisive inquiry that must also be made.

As we’ve previously mentioned, this latter inquiry is important in all types of TCPA cases, but recently the issue has been given special attention in cases regarding text messages and text messaging platforms.  Indeed, this happened again yesterday when the court in Duran v. La Boom Disco determined a nightclub’s use of text messaging did not violate the TCPA because of the level of human involvement exhibited by the nightclub in operating the software and scheduling the sending of messages.

Background

In Duran v. La Boom Disco, the United States District Court for the Eastern District of New York was tasked with analyzing the ExpressText and EZ Texting platforms, which are text messaging software platforms offered to businesses and franchises, whereby the business can write, program, and schedule text messages to be sent to a curated list of consumer mobile phone numbers.

At first glance, the facts in Duran appear to signal a slam-dunk case for the plaintiff.  The defendant nightclub had used the ExpressText and EZ Texting platforms to send marketing text messages to the plaintiff after he replied to a call-to-action advertisement by texting the keyword “TROPICAL” to obtain free admission to the nightclub for a Saturday night event.  Importantly, though, after the plaintiff texted this keyword, he never received a second text message asking whether he consented to receive recurring automated text messages (commonly referred to as a “double opt-in” message).  He did, however, receive approximately 100 text messages advertising other events at the nightclub and encouraging him to buy tickets, which ultimately led him to bring a TCPA action against the club.

Accordingly, the initial issue that the Duran court was tasked with deciding was whether the defendant nightclub had texted the plaintiff without his prior express written consent.  The court quickly dispensed with this issue, determining that the nightclub had not properly obtained written consent from the plaintiff: it had failed to use a double opt-in process to ensure the plaintiff explicitly agreed to receive recurring automated marketing text messages, and it could not otherwise prove that the plaintiff explicitly consented to receiving recurring messages of a marketing nature (which, under the TCPA, the nightclub had the burden to prove).

At this stage, then, things were looking bad for the nightclub.  However, this was not the end of the court’s analysis, as the nightclub could only be liable for sending these non-consented-to messages if they had been sent using an ATDS.  Thus, the court turned to its second – and much more important – line of inquiry: whether the ExpressText and EZ Texting software, as used by the nightclub to text the plaintiff, qualified as an ATDS.

Defining the ATDS Term in the Aftermath of ACA International

In order to determine whether the ExpressText and EZ Texting platforms met the TCPA’s ATDS definition, the court performed an analysis that has become all too common since the FCC’s 2015 Declaratory Order was struck down in ACA International: determining what the appropriate definition of ATDS actually is.  With respect to this issue, the litigants took the same positions that we typically see advanced.  The plaintiff argued that the ExpressText and EZ Texting platforms were the equivalent of “predictive dialers” that could “dial numbers from a stored list,” which were included within the TCPA’s ATDS definition.  The nightclub countered that predictive dialers and devices that dialed from a database fell outside the ATDS definition, meaning the nightclub’s use of the ExpressText and EZ Texting platforms should not result in TCPA liability.

The court began the inquiry with what is now the all-too-familiar analysis of the extent to which the D.C. Circuit’s opinion in ACA International invalidated the FCC’s prior 2003 and 2008 predictive dialer rulings.  After examining the opinion, the court found that those prior rulings still remained intact because “the logic behind invalidating the 2015 Order does not apply to the prior FCC orders.”  The court then concluded that, because the 2003 and 2008 ATDS rulings remained valid, it could use the FCC’s 2003 and 2008 orders to define the ATDS term, and that, based on these rulings, the TCPA also prohibited defendants from sending automated text messages using predictive dialers and/or any dialing system that “dial numbers from a stored list.”

However, the fact that the ExpressText and EZ Texting platforms dialed numbers from a stored list did not end the inquiry since, under the 2003 and 2008 orders, “equipment can only meet the definition of an autodialer if it pulls from a list of numbers, [and] also has the capacity to dial those numbers without human intervention.”  And it was here where the plaintiff’s case fell apart, for while the ExpressText and EZ Texting platforms dialed from stored lists and saved databases, these platforms could not dial the stored numbers without a human’s assistance.  As the court explained:

When the FCC expanded the definition of an autodialer to include predictive dialers, the FCC emphasized that ‘[t]he principal feature of predictive dialing software is a timing function.’  Thus, the human-intervention test turns not on whether the user must send each individual message, but rather on whether the user (not the software) determines the time at which the numbers are dialed….  There is no dispute that for the [ExpressText and EZ Texting] programs to function, ‘a human agent must determine the time to send the message, the content of the messages, and upload the numbers to be texted into the system.’

In sum, because a user determines the time at which the ExpressText and EZ Texting programs send messages to recipients, they operate with too much human involvement to meet the definition of an autodialer.

Human Intervention Saves the Day (Again)

In Duran, the district court made multiple findings that would ordinarily signal doom for a defendant: it broadly defined the ATDS term to include predictive dialers and devices that dialed numbers from a stored list/database, and it found the nightclub’s text messages to have been sent without appropriately obtaining the plaintiff’s express written consent.  However, despite these holdings, the nightclub was still able to come out victorious because of the district court’s inquiry into the human intervention issue and because the ExpressText and EZ Texting platforms the nightclub used required just enough human involvement to move the systems into a zone of protection.  In many ways, this holding – and the analysis employed – is unique; however, with respect to the focus on the human intervention requirement, the district court’s decision can be seen as another step down a path that has been favorable to web-based text messaging platforms.

Indeed, over the course of the last two years, several courts have made it a point to note that the human intervention analysis is a separate, but equally important, determination that the court must analyze before concluding that a device is or is not an ATDS.  With respect to the text-messaging line of cases, this has especially been the case, with numerous courts noting that, no matter whether the ATDS definition is or is not limited to devices that randomly or sequentially generate numbers, the numbers must also be dialed without human intervention.  What is interesting, though, is that the courts that have interpreted this line of cases have focused on different actions as being the key source of human intervention.

As we already discussed, the court in Duran noted that the key inflection point for determining whether human intervention exists is based on the timing of the message and whether a human or the device itself gets to determine when the text message is sent out.  And in Jenkins v. mGage, LLC, the District Court for the Northern District of Georgia reached a similar conclusion, finding that the defendant’s use of a text messaging platform involved enough human intervention to bring the device outside of the ATDS definition because “direct human intervention [was] required to send each text message immediately or to select the time and date when, in the future, the text message will be sent.”  The District Court for the Middle District of Florida also employed this line of thinking in Gaza v. Auto Glass America, LLC, awarding summary judgment to the defendant because the text messaging system the company employed could not send messages randomly, but rather required a human agent to input the numbers to be contacted and designate the time at which the messages were to be sent.

In the case of Ramos v. Hopele of Fort Lauderdale, however, the District Court for the Southern District of Florida found a separate human action to be critical, focusing instead on the fact that “the program can only be used to send messages to specific identified numbers that have been inputted into the system by the customer.”  And another court in the Northern District of Illinois echoed this finding in Blow v. Bijora, Inc., determining that, because “every single phone number entered into the [text] messaging system was keyed via human involvement … [and because] the user must manually draft the message that the platform will send,” the text messaging platform did not meet the TCPA’s ATDS requirements.

Indeed, with the entire industry still awaiting a new ATDS definition from the FCC, there is much confusion as to how the ATDS term will be interpreted and applied to both users of calling platforms and users of texting platforms.  Fortunately, though, there appears to be a trend developing for text message platforms, with multiple courts finding that human intervention is a crucial issue that can protect companies from TCPA liability.  Granted, these courts have not yet been able to agree on what human action actually removes the platform from the ATDS definition, and, as we’ve noted previously, even if human intervention remains the guiding standard, determining precisely what qualifies as sufficient intervention and when in the process of transmitting a message the relevant intervention must occur remains much more an art than a science.  However, the cases mentioned above are still useful in pointing marketers everywhere in the right direction and in presenting guidelines for ensuring they send text messages in compliance with the TCPA.

 

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

Get a Head Start in 2019 – Leveraging Your Cyber Liability Insurance

As 2019 begins, companies should seriously consider the financial and reputational impacts of cyber incidents and invest in sufficient and appropriate cyber liability coverage. According to a recently published report, incidents of lost personal information (such as protected health information) are on the rise and are significantly costing companies. Although cyber liability insurance is not new, many companies lack sufficient coverage. RSM US LLP, NetDiligence 2018 Cyber Claims Study (2018).

According to the 2018 study, cyber claims are impacting companies of all sizes with revenues ranging from less than $50 million to more than $100 billion.  Further, the average total breach cost alone is $603.9K. This does not include crisis services cost (average $307K), the legal costs (defense = $106K; settlement = $224K; regulatory defense = $514K; regulatory fines = $18K), and the cost of business interruption (all costs = $2M; recovery expense = $957K).  In addition to these financial costs, reputational impact stemming from cyber incidents can materially set companies back for a long period of time after the incident.

Companies can reduce risk associated with cyber incidents by developing and implementing privacy and security policies, educating and training employees, and building strong security infrastructures.  Nevertheless, there is no such thing as 100% security, and thus companies should consider leveraging cyber liability insurance to offset residual risks.  With that said, cyber liability coverages vary across issuers and can contain many carve-outs and other complexities that can prevent or reduce coverage.  Therefore, stakeholders should review their cyber liability policies to ensure that they understand the terms and conditions of such policies. Key items to evaluate can include: coverage levels per claim and in the aggregate, retention amounts, notice requirements, exclusions, and whether liability arising from malicious third-party conduct is sufficiently covered.

While cyber liability insurance will not itself reduce the risk of a cyber incident, it is increasingly a critical component of a holistic risk mitigation strategy given the world we live in.

©2019 Epstein Becker & Green, P.C. All rights reserved.
This post was written by Alaap B. Shah and Daniel Kim from Epstein Becker & Green, P.C.

Now I Get It!: Using the FCC’s Order Keeping Text Messages as “Information Services” to Better Understand the Communications Act

Little known fact: the TCPA is just a tiny little part of something much bigger and more complex called the Communications Act of 1934, as amended by the Telecom Act of 1996 (which the FCC loves to just call the “Communications Act”). And yes, I know the TCPA was enacted in 1991, but trust me, it is still part of the Communications Act of 1934.

The Communications Act divides communications services into two mutually exclusive types: highly regulated “telecommunications services” and lightly regulated “information services.”

So let’s look at some definitions:

A “telecommunications service” is a common carrier service that requires “the offering of telecommunications for a fee directly to the public, or to such classes of users as to be effectively available to the public, regardless of the facilities used.”

“Telecommunications” is “the transmission, between or among points specified by the end user, of information of the user’s choosing without change in the form or content of the information as sent and received.”

By contrast, an “information service” is “the offering of a capability for generating, acquiring, storing, transforming, processing, retrieving, utilizing, or making available information via telecommunications, and includes electronic publishing, but does not include any use of any such capability for the management, control, or operation of a telecommunications system or the management of a telecommunications service.”

Make sense so far? Basically, a telecommunications service is something that telecommunications companies–who are common carriers–can’t tinker with and have to automatically connect without modifying. For instance, if I want to call my friends from law school and wish them well, Verizon can’t say–wait a minute, Eric doesn’t have any friends from law school–and refuse to connect the call. Verizon must just connect the call. It doesn’t matter who I am calling, how long the call will be, or why I’m making the call; the call must connect. The end.

Information services are totally different animals. Carriers can offer them or not, and they can tinker with and manipulate such messages all they want–see also net neutrality.

So if text messages are a telecommunication then they must be connected without question. But if text messages are an information service then carriers can decide which messages get through and which don’t.

It might seem like you’d want text messages to be information services–after all why would we want the carriers determining how and when we can text each other? Well the FCC has an answer– automatic spam texts.

If text messages are subject to common carrier rules then people can blast your phone with spam text messages and the carriers can’t stop them. True the TCPA exists so you can sue the texter but–as we know–the vast majority of spammers are shady fly-by-nights or off-shore knuckleheads that you can’t find. So the FCC believes that keeping text messages categorized as “information services”–as they are currently defined–will keep spammers away from your SMS inbox. It issued a big order today accomplishing just that. 

And to be sure, the carriers are monitoring and blocking spam texts as we speak. As the FCC finds: “wireless messaging providers apply filtering to prevent large volumes of unwanted messaging traffic or to identify potentially harmful texts.”  The FCC credits these carrier efforts with keeping text messages relatively spam free:

For example, the spam rate for SMS is estimated at 2.8% whereas the spam rate for email is estimated at over 50%.  Wireless messaging is therefore a trusted and reliable form of communication for many Americans. Indeed, consumers open a far larger percentage of wireless messages than email and open such messages much more quickly.

So from a policy perspective keeping text messages as information services probably makes sense, but let’s review those definitions again.

A telecommunications service is essentially the transmission of information of the user’s choosing.

An information service is “the offering of a capability for generating, acquiring, storing, transforming, processing, retrieving, utilizing, or making available information via telecommunications.”

So is a text message the transmission of information of my choosing or is it the use of Verizon’s ability to store and retrieve information I am sending? (And is there really even a difference?)

Well the FCC says texts are absolutely information services and here’s why:

  • SMS and MMS wireless messaging services provide the capability for “storing” and “retrieving” information. When a user sends a message, the message is routed through servers on mobile networks. When a recipient device is unavailable to receive the message because it is turned off, the message will be stored at a messaging center in the provider’s network until the recipient device is able to receive it.

  • SMS and MMS wireless messaging services also involve the capability for “acquiring” and “utilizing” information. As CTIA explains, a wireless subscriber can “ask for and receive content, such as weather, sports, or stock information, from a third party that has stored that information on its servers. SMS subscribers can ‘pull’ this information from the servers by making specific requests, or they can signal their intent to have such information regularly ‘pushed’ to their mobile phone.”

  • SMS and MMS wireless messaging services involve “transforming” and “processing” capabilities. Messaging providers, for example, may change the form of transmitted information by breaking it into smaller segments before delivery to the recipient in order to conform to the character limits of SMS (a rough sketch of this segmentation step follows below).
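To make that “transforming” bullet concrete, here is a rough sketch of the segmentation step in Python. It is illustrative only: the 160-character single-message limit and 153-character concatenated-segment limit are the standard GSM-7 figures, an assumption on my part rather than anything drawn from the FCC’s order, and real carrier gateways handle encodings and headers that this toy ignores.

    # Toy illustration of SMS segmentation, not a carrier implementation.
    SINGLE_SMS_LIMIT = 160      # standard GSM-7 limit for a standalone SMS
    CONCAT_SEGMENT_LIMIT = 153  # per-segment limit once a concatenation header is added

    def segment_message(text: str) -> list[str]:
        """Break a long message into SMS-sized segments before delivery."""
        if len(text) <= SINGLE_SMS_LIMIT:
            return [text]
        return [text[i:i + CONCAT_SEGMENT_LIMIT]
                for i in range(0, len(text), CONCAT_SEGMENT_LIMIT)]

    print(len(segment_message("x" * 400)))  # a 400-character message becomes 3 segments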

Yeah…I guess. But realistically, when I send a text I just want it to get there the way I sent it. Maybe there’s some storing and utilizing and processing or whatever, but not very much.

And that was Twilio’s point. It asserted:  “the only offering that wireless carriers make to the public, with respect to messaging, is the ability of consumers to send and receive messages of the consumers’ design and choosing.” That sounds right.

Well the FCC disagrees: “These arguments are unpersuasive.”

The FCC’s point is that “what matters are the capabilities offered by the service, and as we explain above, wireless messaging services feature storage, retrieval, and other information-processing capabilities.”

Hmmm. ok. I guess I’m ok with that if you are.

But let’s get to the good stuff from a TCPA perspective. Recall that a text message is a “call” for purposes of the TCPA. Well if a text isn’t even a telecommunication how can it be a call? Asks Twilio.

Yeah, FCC, how can it be a call? Asks the Czar.

The Commission answers:

the Commission’s decision merely clarified the meaning of the undefined term “call” in order to address the obligations that apply to telemarketers and other callers under the TCPA. That decision neither prohibits us from finding that wireless messaging service is an information service, nor compels us to conclude that messaging is a telecommunications service.

Ok. Well. Why not?

The Commission answers further:

The TCPA provision itself generally prohibits the use of a facsimile machine to send unsolicited advertisements, but that does not constitute a determination that an individual’s sending of a fax is a telecommunications service, just as the application to an individual’s making “text calls” does not reflect a determination that wireless messaging is a telecommunications service. In any event, for purposes of regulatory treatment, there is a significant difference between being subject to Commission regulation and being subject to per se common carrier regulation. Only the latter requires classification as a telecommunications service. We clarify herein that SMS and MMS wireless messaging are Title I services, and thus, will not be subject to per se common carrier regulation.

Umm FCC, no disrespect intended, but I kind of feel like that doesn’t really answer the question.

But in any event, the FCC plainly believes that text messages are a “call” for purposes of the TCPA but are not a “telecommunication” for purposes of common carrier regulation.

From a policy perspective I’m fine with the conclusion the Commission reached–it makes sense to keep text messages free from spam. But we have to be honest with ourselves here: the Commission just did legal somersaults to get there. Maybe it’s time for Congress to take another look at the Communications Act, hmmm?

In any event, now you get it!

 

Copyright © 2018 Womble Bond Dickinson (US) LLP All Rights Reserved.
This post was written by Eric Troutman of Womble Bond Dickinson (US) LLP.

The Importance of Information Security Plans

In the first installment of our weekly series during National Cybersecurity Awareness Month, we examine information security plans (ISPs) as part of an overall cybersecurity strategy.  Regardless of the size or function of an organization, having an ISP is a critical planning and risk management tool and, depending on the business, it may be required by law.  An ISP details the categories of data collected, the ways that data is processed or used, and the measures in place to protect it.  An ISP should address the different categories of data maintained by the organization, including employee data and customer data, as well as sensitive business information like trade secrets.

Having an ISP is beneficial for many reasons but there are two primary benefits.  First, once an organization identifies the data it owns and processes, it can more effectively assess risks and protect the data.  Second, in the event of a cyber attack or breach, an organization’s thorough understanding of the types of data it holds and the location of that data will expedite response efforts and reduce financial and reputational damage.

While it is a tedious task to determine the data that an organization collects and create a data inventory from that information, it is well worth the effort.  Once an organization assembles a data inventory, it can assess whether it needs all the data it collects before it invests time, effort and money into protecting it.  From a risk management perspective, it is always best to collect the least amount of information necessary to carry out business functions.  By eliminating unnecessary data, there is less information to protect and, therefore, less information at risk in the event of a cyber attack or breach.

Some state, federal and international laws require an ISP (or something like it).  For example, in Massachusetts, all businesses (regardless of location) that collect personal information of Massachusetts residents, which includes an organization’s own employees, “shall develop, implement, and maintain a comprehensive information security program that is written . . . and contains administrative, technical, and physical safeguards” based on the size, operations and sophistication of the organization.  The MA Office of Consumer Affairs and Business Regulation created a guide for small businesses to assist with compliance.

In Connecticut, there is no ISP requirement unless you contract with the state or are a health insurer.  Still, the state data breach law pertaining to electronically stored information offers a presumption of compliance in the event of a breach if the organization timely notifies and reports under the statute and follows its own ISP.  Practically speaking, this means that the state Attorney General’s office is far less likely to launch an investigation into the breach.

On the federal level, by way of example, the Gramm-Leach-Bliley Act (GLBA) requires financial institutions to have an ISP, and the Health Insurance Portability and Accountability Act (HIPAA) requires covered entities to perform a risk analysis, which includes an assessment of the types of data collected and how that data is maintained and protected.  Internationally, the EU General Data Protection Regulation (GDPR), which took effect on May 25, 2018 and applies to many US-based organizations, requires a “record of processing activities.”  While this requirement is more extensive than the ISP requirements noted above, the concept is similar.

Here is a strategy for creating an ISP for your organization (an illustrative data-inventory sketch follows the list):

  1. Identify the departments that collect, store or process data.
  2. Ask each department to identify: (a) the categories of data they collect (e.g., business data and personal data such as name, email address, date of birth, social security number, credit card or financial account number, government ID number, etc.); (b) how and why they collect it; (c) how they use the data; (d) where it is stored; (e) format of the data (paper or electronic); and (f) who has access to it.
  3. Examine the above information and determine whether it needs to continue to be collected or maintained.
  4. Perform a security assessment, including physical and technological safeguards that are in place to protect the data.
  5. Devise additional measures, as necessary, to protect the information identified.  Such measures may include limiting electronic access to certain employees, file encryption, IT security solutions to protect the information from outside intruders or locked file cabinets for paper documents.  Training should always be an identified measure for protecting information and we will explore that topic thoroughly later this month.
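To make steps 1 through 3 concrete, here is a minimal sketch of what one data-inventory entry might look like, expressed in Python.  The field names and values are hypothetical, chosen to mirror the questions in step 2; they are not drawn from any statute or standard.

    # Hypothetical inventory entry mirroring the questions in step 2.
    inventory_entry = {
        "department": "Human Resources",
        "data_categories": ["name", "social security number", "date of birth"],
        "how_and_why_collected": "onboarding forms, for payroll and benefits",
        "use": "payroll processing and benefits administration",
        "storage_location": "HRIS database (cloud-hosted)",
        "format": "electronic",
        "access": ["HR staff", "payroll vendor"],
        "still_needed": True,  # step 3: re-evaluate before investing in protection
    }

Collecting one such entry per department yields the inventory that drives the security assessment in step 4.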
© Copyright 2018 Murtha Cullina

“Hey Alexa – Tell Me About Your Security Measures”

California continues to lead the nation in cybersecurity and privacy legislation on the heels of the recent California Consumer Privacy Act of 2018 (“CCPA”).  On September 28, 2018, Governor Brown signed into law two nearly identical bills, Assembly Bill No. 1906 and Senate Bill No. 327 (the “Legislation”), each of which required the signing of the other to become law.  Thus, California becomes the first state in the nation to regulate “connected devices” – the Internet of Things (IoT).  The Legislation will go into effect in January 2020.

  1. CA IoT Bills Apply to Manufacturers of Connected Devices

This Legislation applies to manufacturers of connected devices sold or offered for sale in California.  A connected device is defined as any device with an Internet Protocol (IP) or Bluetooth address, and capable of connecting directly or indirectly to the Internet.  Beyond examples such as cell phones and laptops, numerous household devices, from appliances such as refrigerators and washing machines, televisions, and children’s toys, could all meet the definition of connected device.

  2. What Manufacturers of Connected Devices Must Do

Manufacturers must equip the connected device with reasonable security feature(s) that are “appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, [and] designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure.”

The Legislation provides some guidance as to what will be considered a reasonable security measure.  Devices that authenticate users with either a preprogrammed password unique to each manufactured device, or with a security feature that forces the user to generate a new means of authentication before access is granted, will be deemed to have implemented a reasonable security feature.  The use of a generic, default password will not suffice.

Other than following this guidance, the Legislation does not provide specific methods of providing for reasonable security features.
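Purely as an illustration of that password guidance, here is a minimal sketch of a provisioning check a manufacturer might run at the factory.  It reflects one possible reading of the unique-preprogrammed-password option and is an assumption on my part, not a compliance recipe drawn from the Legislation.

    import secrets

    GENERIC_DEFAULTS = {"admin", "password", "1234", "default"}  # would not suffice

    def provision_device_password() -> str:
        """Generate a password unique to each manufactured device."""
        return secrets.token_urlsafe(12)  # cryptographically random, per-device

    def plausibly_reasonable(password: str, already_issued: set[str]) -> bool:
        # A generic default, or a password reused across devices, fails the
        # statute's safe harbor as described above.
        return password not in GENERIC_DEFAULTS and password not in already_issued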

  3. What Is Not Covered

a. Unaffiliated Third Party Software:  Many connected devices use multiple pieces of software to function.  The Legislation specifically states that “This title shall not be construed to impose any duty upon the manufacturer of a connected device related to unaffiliated third-party software or applications that a user chooses to add to a connected device.”

b. Companies That Provide Mechanisms To Sell Or Distribute Software: Application store owners, and others that provide a means of purchasing or downloading software or applications are not required to enforce compliance.

c. Devices or Functionality Already Regulated by Federal Authority: Connected Devices whose functionality is already covered by federal law, regulations or guidance of a federal agency need not comply.

d. Manufacturers Are Not Required To Lock Down Devices: Manufacturers are not required to prevent users from gaining full control of the device, including being able to load their own software at their own discretion.

  4. No Private Right of Action

No private right of action is provided; instead, the “Attorney General, a city attorney, a county counsel, or a district attorney shall have the exclusive authority to enforce this title.”

  5. Not Limited To Personal Information

Previously, other California legislation had required that data security measures be implemented.  For example, California’s overarching data security law (Cal. Civ. Code § 1798.81.5) requires reasonable data security measures to protect certain types of personal information.  The current approach is not tied to personal information, but rather applies to any connected device that meets the definition provided.

  6. Likely Consequences After The Legislation Comes Into Effect in January 2020

a. Impact Will Be National: Most manufacturers will want to sell their devices in California.  As such, they will need to comply with this California Legislation, and unless they somehow segment which devices are offered for sale in the California market, they will have to comply nationally.

b. While Physical Device Manufacturers Bear Initial Burden, Software Companies Will Be Affected: The Legislation applies to “any device, or other physical object that is capable of connecting to the Internet, directly or indirectly, and that is assigned an Internet Protocol address or Bluetooth address.”  While this puts the burden foremost on physical device manufacturers, software companies that provide software to device manufacturers for inclusion on the device before the device is offered for sale will need to support compliance with the Legislation.

c. Merger And Acquisition Events Will Serve As Private Enforcement Mechanisms: While there may not be a private right of action provided, whenever entities or portions of entities that are subject to the Legislation are bought and sold, the buyer will want to ensure compliance by the seller with the Legislation or otherwise ensure that the seller bears the risk or has compensated the buyer.  Effectively, this will mean that companies that want to be acquired will need to come into compliance or face a reduced sales price or a similar mechanism of risk shifting.

 

©1994-2018 Mintz, Levin, Cohn, Ferris, Glovsky and Popeo, P.C. All Rights Reserved.

TCPA Consent Medley: Third New Decision Enforcing TCPA Consent Provision in Consumer Agreement Has “Robocallers” Humming

After a long period of quiet on the issue, TCPAland has seen three swift decisions on Good Reyes (Reyes v. Lincoln Auto. Fin. Servs., 861 F.3d 51 (2d Cir. 2017), as amended (Aug. 21, 2017)), all aligning to enforce contractual TCPA consent provisions. First, Navient scored a big win, but that was within the Second Circuit so it didn’t make much of a stir. But then a real breakthrough: the Chief Judge of the Northern District of Alabama held that TCPA consent provisions in consumer agreements could not be revoked–the first such ruling from within the Eleventh Circuit. And now the trifecta. A court within the Middle District of Florida–seemingly the most consumer-friendly TCPA jurisdiction in the country as of late–granted summary judgment on a TCPA claim to a Defendant today, holding that a consumer cannot stop robocalls after agreeing to receive such calls as a term in a written contract.

Woah.

The case is Medley v. Dish Network, Case No. 8:16-cv-2534-T-36TBM, 2018 U.S. Dist. LEXIS 144895 (M.D. Fla. Aug. 27, 2018), and it represents the first decision out of the Middle District of Florida to apply Good Reyes and hold that TCPA consent is irrevocable in certain circumstances. As shown below, Medley took no prisoners in distinguishing and declining to follow decisions that had held otherwise.

After first determining that the contractual consent provisions survived Plaintiff’s bankruptcy discharge because Medley failed to include her debt to Dish on her schedules, the Court deftly articulated the governing rule of Good Reyes as follows:

“Although voluntary and gratuitous consent could be revoked under the common law, which was recognized by the Eleventh Circuit in Osorio, the Second Circuit explained that consent could ‘become irrevocable when it is provided in a legally binding agreement, in which case any attempted termination is not effective.’”

Medley at *29

The Medley court next tips its hat to the decision in Few, citing the Northern District of Alabama decision for the proposition that where a “plaintiff g[i]ve[s] consent to be called ‘as part of a bargained-for exchange and not merely gratuitously, she was unable to unilaterally revoke that consent’” (Medley at *30) before remarking simply: “This Court agrees.” Id.

The Court goes on to find that “it is black-letter contract law that one party to an agreement cannot, without the other party’s consent, unilaterally modify the agreement once it has been executed” and “[n]othing in the TCPA indicates that contractually-granted consent can be unilaterally revoked in contradiction to black-letter law.” Medley at *30. How sweet is that?

The Medley court also distinguished Gager v. Dell Financial Services, LLC, 727 F.3d 265, 270-71 (3d Cir. 2013), Target National Bank v. Welch, No. 8:15-cv-614-T-36, 2016 WL 1157043 (M.D. Fla. Mar. 24, 2016), and Patterson v. Ally Financial, Inc., No. 3:16-cv-1592-J-32-JBT, 2018 WL 647438 (M.D. Fla. Jan. 31, 2018) as cases involving application consents, as opposed to contractual consent provisions. Medley also noted that the consent clause in Patterson did not apply to the type of calls being made in that case, a rather solid basis to distinguish and decline to follow the decision.

The Court also takes issue with the reasoning in Ammons v. Ally Financial, Inc., No. 3:17-cv-00505, 2018 WL 3134619 (M.D. Tenn. June 27, 2018)–refusing to apply Good Reyes despite contractual consent terms in an automotive finance agreement–and declines to follow it. In Medley’s view, Ammons over-reads Osorio and under-analyzes Patterson and Welch.

Accordingly, the court concludes that Defendant is entitled to summary judgment and sums up matters succinctly in this clean-as-a-whistle conclusion:

“[T]he Court finds that in the absence of a statement by Congress that the TCPA alters the common-law notion that consent cannot be unilaterally revoked where given as part of a bargained for contract, the Court will decline to do so.”

Medley at *36.

Notably, as was the case in Harris, the contract in Medley did not include a revocation provision and was simply silent on the issue of whether consent could be revoked. As in Harris the Medley court–correctly–interpreted that silence to mean that consent could not be revoked at all.

Since many will ask: Medley was decided by the Hon. Charlene Honeywell, who is no stranger to TCPA claimants appearing before her. With Medley she has certainly made her TCPAland mark.

And with Few and Medley working in their favor, Defendants seeking to enforce contractual TCPA consent provisions suddenly have a lot to be optimistic about. But this is TCPAland and, in the words of the Grand Duchess, it’s best never to get too comfortable.

Copyright © 2018 Womble Bond Dickinson (US) LLP All Rights Reserved.


Treasury Releases Report on Nonbank Institutions, Fintech, and Innovation

On July 31, 2018, the U.S. Department of the Treasury released a report identifying numerous recommendations intended to promote constructive activities by nonbank financial institutions, embrace financial technology (“fintech”), and encourage innovation.

This is the fourth and final report issued by Treasury pursuant to Executive Order 13772, which established certain Core Principles designed to inform the manner in which the Trump Administration regulates the U.S. financial system.  Among other things, the Core Principles include:  (i) empower Americans to make independent financial decisions and informed choices; (ii) prevent taxpayer-funded bailouts; (iii) foster economic growth and vibrant financial markets through more rigorous regulatory impact analysis; (iv) make regulation efficient, effective, and appropriately tailored; and (v) restore public accountability within federal financial regulatory agencies and rationalize the federal financial regulatory framework.

Treasury’s lengthy report contains over 80 recommendations, which are summarized in an appendix to the report.  The recommendations generally fall into four categories:  (i) adapting regulatory approaches to promote the efficient and responsible aggregation, sharing, and use of consumer financial data and the development of key competitive technologies; (ii) aligning the regulatory environment to combat unnecessary regulatory fragmentation and account for new fintech business models; (iii) updating a range of activity-specific regulations to accommodate technological advances and products and services offered by nonbank firms; and (iv) facilitating experimentation in the financial sector.

Some notable recommendations include:

Embracing Digitization, Data, and Technology

  • TCPA Revisions: Recommending that Congress and the Federal Communications Commission amend or provide guidance on the Telephone Consumer Protection Act to address unwanted calls and revocation of consent.

  • Consumer Access to Financial Data: Recommending that the Bureau of Consumer Financial Protection (“BCFP”) develop best practices or principles-based rules to promote consumer access to financial data through data aggregators and other third parties.

  • Data Aggregation: Recommending that various agencies eliminate legal and regulatory uncertainties so that data aggregators can move away from screen scraping to more secure and efficient methods of access.

  • Data Security and Breach Notification:  Recommending that Congress enact a federal data security and breach notification law to protect consumer financial data and notify consumers of a breach in a timely manner, with uniform national standards that preempt state laws.

  • Digital Legal Identity:  Recommending efforts by financial regulators and the Office of Management and Budget to enhance public-private partnerships that facilitate the adoption of trustworthy digital legal identity products and services and support full implementation of a U.S. government federated digital identity system.

  • Cloud Technologies, Artificial Intelligence, and Financial Services:  Recommending that regulators modernize regulations and guidance to avoid imposing obstacles on the use of cloud computing, artificial intelligence, and machine learning technologies in financial services, and to provide greater regulatory clarity that would enable further testing and responsible deployment of these technologies by financial services firms as these technologies evolve.

Aligning the Regulatory Framework to Promote Innovation

  • Harmonization of State Licensing Laws:  Encouraging efforts by state regulators to develop a more unified licensing regime, particularly for money transmission and lending, and to coordinate supervisory processes across the states, and recommending Congressional action if meaningful harmonization is not achieved within three years.

  • OCC Fintech Charter:  Recommending that the Office of the Comptroller of the Currency move forward with a special purpose national bank charter for fintech companies.

  • Bank-Nonbank Partnerships:  Recommending banking regulators tailor and clarify regulatory guidance regarding bank partnerships with nonbank firms.

Updating Activity-Specific Regulations

  • Codification of “Valid When Made” and True Lender Doctrines:  Recommending that Congress codify the “valid when made” doctrine and the legal status of a bank as the “true lender” of loans it originates but then places with a nonbank partner, and that federal banking regulators use their authorities to affirm these doctrines.

  • Encouraging Small-Dollar Lending:  Recommending that the BCFP rescind its Small-Dollar Lending Rule and that federal and state financial regulators encourage sustainable and responsible short-term, small-dollar installment lending by banks.

  • Adoption of Debt Collection Rules:  Recommending that the BCFP promulgate regulations under the Fair Debt Collection Practices Act to establish federal standards governing third-party debt collection, including standards that address the reasonable use of digital communications in debt collection activities.

  • Promote Experimentation with New Credit Models and Data:  Recommending that regulators support and provide clarity to enable the testing and experimentation of newer credit models and data sources by banks and nonbank financial firms.

  • Regulation of Credit Bureaus:  Recommending that the Federal Trade Commission and other relevant regulators take necessary actions to protect consumer data held by credit reporting agencies and that Congress assess whether further authority is needed in this area.

  • Regulation of Payments:  Recommending that the Federal Reserve act to facilitate a faster payments system, as well as changes to the BCFP’s remittance transfer rule.

Enabling the Policy Environment

  • Regulatory Sandboxes:  Recommending that federal and state regulators design a unified system to provide expedited regulatory relief and permit meaningful experimentation for innovative financial products, services, and processes, essentially creating a “regulatory sandbox.”

  • Technology Research Projects:  Recommending that Congress authorize financial regulators to undertake research and development and proof-of-concept technology partnerships with the private sector.

  • Cybersecurity and Operational Risks:  Recommending that financial regulators consider cybersecurity and other operational risks as new technologies are implemented, firms become increasingly interconnected, and consumer data are shared among a growing number of third parties.

© 2018 Covington & Burling LLP

Can I Secure a Loan with Bitcoin? Part I

Each day seems to bring another story about Bitcoin, Ethereum, Litecoin, or another virtual currency. If virtual currencies continue to grow in popularity, it’s only a matter of time before borrowers offer to pledge virtual currency as collateral for loans.  This article does not advise lenders on whether they should secure loans with virtual currency, but instead it focuses on whether a lender can use the familiar tools of Article 9 of the Uniform Commercial Code (“UCC”) to create and perfect a security interest in bitcoin.  (In this article, “bitcoin” is used as a generic term for all virtual currencies.)

Article 9 Basics

Article 9 allows a creditor to create a security interest in personal property. The owner of the property grants the creditor a security interest through a written security agreement. The security agreement creates the security interest between the secured party and the debtor. The secured party must then “perfect” the security interest to obtain lien priority over third parties and to protect its secured status should the debtor file bankruptcy.

Security interests are perfected in different ways depending on the type of collateral. Article 9 divides personal property into different categories, such as goods, equipment, inventory, accounts, money, and intangibles. The primary ways to perfect a security interest are (1) filing, with the appropriate filing agency, a UCC-1 financing statement containing a sufficient description of the collateral, (2) possession, or (3) control.

Bitcoin and Blockchain

Virtual currencies are electronic representations of value that may not have an equivalent value in a real government-backed currency. They can be used as a payment system, or digital currency, without an intermediary like a bank or credit card company. While virtual currencies can function like real currencies in certain transactions, and certain virtual currencies can be exchanged into real currencies, a virtual currency itself does not have legal tender status. Virtual currency is virtual—there is no bitcoin equivalent to a quarter or dollar bill.

Bitcoin operates on a protocol that uses distributed-ledger technology. This technology is called the blockchain. The blockchain eliminates the need for intermediaries such as banks. Unlike a dollar, which is interchangeable, each bitcoin is unique. The blockchain records all bitcoin transactions to prevent someone from re-spending the same bitcoin over and over.
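As a loose illustration of that double-spend check, consider the toy ledger below.  It is an assumption-laden sketch: real Bitcoin validates transactions against unspent transaction outputs and cryptographic signatures, none of which appear here.

    # Toy ledger: each (unique) coin maps to its current owner.
    ledger = {"coin-42": "alice"}

    def transfer(coin_id: str, sender: str, recipient: str) -> bool:
        """Reject any attempt to spend a coin the sender no longer owns."""
        if ledger.get(coin_id) != sender:
            return False  # double-spend (or theft) attempt rejected
        ledger[coin_id] = recipient
        return True

    print(transfer("coin-42", "alice", "bob"))    # True: first spend succeeds
    print(transfer("coin-42", "alice", "carol"))  # False: re-spend is rejected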

Suppose you wanted to transfer cash to a friend. You could transfer funds from your bank account to her bank account. The banks act as intermediaries. Now suppose you wanted to transfer cash to that same friend without a middle man. The only way to do that is to meet her and hand over the cash. This exchange may not be practical for many reasons. You might live far from each other. Even if you’re near each other, you might not want to travel around town with a briefcase full of cash. Bitcoin and blockchain technology allow the transfer of cash directly and digitally without a middle man.

The blockchain is both transparent and opaque. It is transparent as to the ownership chain of every bitcoin.  In this way, it is easier to “trace” a bitcoin than to trace cash.  But the blockchain presently does not show liens on bitcoin.  So a secured party can confirm if a borrower owns bitcoin, but not if the borrower or a previous owner encumbered the bitcoin.

Is Bitcoin Money?

At first glance, bitcoin would seem to fall in the category of “money.” Article 9 defines money as a medium of exchange authorized or adopted by a domestic or foreign government. No government has adopted bitcoin as a medium of exchange—dollars, euros, and pounds meet the definition of money, but bitcoin does not. And although a secured party perfects its security interest in money by physical possession, bitcoin is virtual, so physical possession is impossible.

Is Bitcoin a Deposit Account?

A deposit account is a demand, time, savings, passbook, or similar account maintained with a bank. With a traditional deposit account, the secured party perfects its security interest by having “control” over that account. This is usually accomplished when the debtor, the debtor’s bank, and the secured party execute a deposit account control agreement. If the debtor defaults, the secured party can direct the debtor’s bank to transfer the funds from the account.

Bitcoin often is stored in a digital wallet with an exchange like Coinbase. The wallet is access-restricted by private keys or passwords, but that wallet is not a deposit account. The bitcoin itself is held by its owner on the blockchain, which is decentralized. Unlike a deposit account, there is no intermediary like a bank. With no intermediary, there is no way to establish “control” over the bitcoin. Consequently, bitcoin does not meet the definition of a deposit account.

Bitcoin is (Probably) a General Intangible

By process of elimination, bitcoin should be treated as a general intangible. A general intangible is personal property that does not fall into any other Article 9 category. A lender perfects a security interest in general intangibles by filing a UCC-1 financing statement. In North Carolina, you file it with the Secretary of State.

Although we can categorize bitcoin as a general intangible for Article 9 purposes, and create and perfect a security interest accordingly, several issues arise that question the overall effectiveness of that security interest. For starters, a security interest in general intangibles follows the sale, license, or other disposition of the collateral, unless the secured party consents to the transfer free of its security interest, the obligations secured by the security interest have been satisfied, or the security interest has otherwise terminated.

This is a problem for the lender wanting a first-priority lien on the bitcoin. Before approaching the lender, the borrower may have granted a secured party a security interest in bitcoin, or granted a security interest in “all assets whether now owned or acquired later” and then acquired bitcoin. In both instances, the bitcoin is encumbered by the security interest. The lender could not confirm prior liens without searching UCC-1 filings in all 50 states (and even that might not catch international liens).

Even if a secured party acquires a senior lien in bitcoin, that party still has the problem of lack of control over the bitcoin. Without control, bitcoin collateral is susceptible to unauthorized transfers. Even if a borrower has an account at an online currency exchange like Coinbase—which allows you to exchange bitcoin into traditional money—the exchange may be unwilling to sign a tri-party control agreement to restrict the debtor’s ability to exchange the bitcoin. Upon default, without the debtor’s cooperation, it will be difficult or impossible to enforce, take possession, and liquidate the bitcoin.

Conclusion

Putting aside its value and volatility, the intrinsically unique nature of bitcoin makes it an imperfect and problematic form of collateral under Article 9. Part II of this article will discuss the pros and cons of using Article 8 of the UCC to create and perfect a security interest in bitcoin. Article 8 has the potential to be a safer and more reliable solution for these transactions.

© 2018 Ward and Smith, P.A. All Rights Reserved.

This post was written by Lance P. Martin of Ward and Smith, P.A.


New OCR Checklist Outlines How Health Care Facilities Can Fight Cyber Extortion

As technology has advanced, cyber extortion attacks have risen, and they will continue to be a major security issue for organizations. Cyber extortion can take many forms, but it typically involves cybercriminals demanding money to stop or delay their malicious activities, which include stealing sensitive data or disrupting computer services. Health care and public health sector organizations that maintain sensitive data are often targets for cyber extortion attacks.

Ransomware is a form of cyber extortion where attackers deploy malware targeting an organization’s data, rendering it inaccessible, typically by encryption. The attackers then demand money in exchange for an encryption key to decrypt the data. Even after payment is made, organizations may still lose some of their data.

Other forms of cyber extortion include Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks. These attacks normally direct a high volume of network traffic to targeted computers so the affected computers cannot respond and are otherwise inaccessible to legitimate users. Here, an attacker may initiate a DoS or DDoS attack against an organization and demand payment to stop the attack.

Additionally, cyber extortion can occur when an attacker gains access to an organization’s computer system, steals sensitive data from the organization, and threatens to publish that data. The attacker threatens to reveal sensitive data, including protected health information (PHI), to coerce payment.

On January 30, 2018, the HHS Office for Civil Rights (OCR) published a checklist to assist HIPAA covered entities and business associates on how to respond to a cyber extortion attack. Organizations can reduce the chances of a cyber extortion attack by:

  • Implementing a robust risk analysis and risk management program that identifies and addresses cyber risks holistically, throughout the entire organization;
  • Implementing robust inventory and vulnerability identification processes to ensure accuracy and thoroughness of the risk analysis;
  • Training employees to better identify suspicious emails and other messaging technologies that could introduce malicious software into the organization;
  • Deploying proactive anti-malware solutions to identify and prevent malicious software intrusions;
  • Patching systems to fix known vulnerabilities that could be exploited by attackers or malicious software;
  • Hardening internal network defenses and limiting internal network access to deny or slow the lateral movement of an attacker and/or propagation of malicious software;
  • Implementing and testing robust contingency and disaster recovery plans to ensure the organization is capable and ready to recover from a cyber-attack;
  • Encrypting and backing up sensitive data;
  • Implementing robust audit logs and reviewing such logs regularly for suspicious activity; and
  • Remaining vigilant for new and emerging cyber threats and vulnerabilities.

If a cyber extortion attack does happen, organizations should be prepared to take the necessary steps to prevent further damage. In the event of a cyber-attack or similar emergency, an entity:

  • Must execute its response and mitigation procedures and contingency plans;
  • Should report the crime to other law enforcement agencies, which may include state or local law enforcement, the Federal Bureau of Investigation (FBI) and/or the Secret Service. Any such reports should not include protected health information, unless otherwise permitted by the HIPAA Privacy Rule;
  • Should report all cyber threat indicators to federal and information-sharing and analysis organizations (ISAOs), including the Department of Homeland Security, the HHS Assistant Secretary for Preparedness and Response, and private-sector cyber-threat ISAOs; and
  • Must report the breach to OCR as soon as possible, but no later than 60 days after the discovery of a breach affecting 500 or more individuals, and notify affected individuals and the media unless a law enforcement official has requested a delay in the reporting. An entity that discovers a breach affecting fewer than 500 individuals has an obligation to notify individuals without unreasonable delay, but no later than 60 days after discovery; and OCR within 60 days after the end of the calendar year in which the breach was discovered.
© 2018 Dinsmore & Shohl LLP. All rights reserved.