Media Education Is Crucial to Preparing Young Attorneys to Speak on the Record

Last month, a photojournalist for The Daily Northwestern, Northwestern University’s campus newspaper, captured photographs of student protestors who rushed a lecture hall where former Attorney General Jeff Sessions was speaking on campus. One of the pictures the photojournalist published featured a protestor sprawled on the floor. Students involved in the protest reacted with sharp criticism: being photographed in public had caused the protestor trauma, they argued, and the reporters who used the student directory to contact protestors for quotes had invaded those students’ privacy.

In response to this pressure, editors at the newspaper took the photographs down and published an apology — steps that were immediately scorned by seasoned media professionals who explained that reporting on public events, through gathering quotes and taking pictures, is one of the most basic functions of journalism.

As with many stories that go viral, overheated Twitter commentary led to cross-generational attacks, straw-man arguments and handwringing over the death of traditional media. But when you push aside the noise around this story, it becomes clear that what happened at Northwestern illuminates an interesting disconnect between young people on the cusp of the Millennial-Z generations and the rest of us: we have different ideas about the purpose and function of traditional media.

What does this have to do with legal marketing? The oldest members of Generation Z are preparing to enter law school in the fall of 2020, which means firms are just a few years out from welcoming this new crop of lawyers. Forward-thinking law firms have long understood the value of media training in helping their attorneys build fruitful relationships with reporters and manage individual and firm brands across multiple channels. The Northwestern case, however, demonstrates that firms must also be prepared to add some basic media education to their business development curriculum.

Younger lawyers may have a steep learning curve if they want to launch their careers with a productive media strategy. Here are three lessons firms will need to figure out how to teach them:

It’s hard to understand what you don’t consume. As social media has become such a central part of the way we broadcast and receive information, it fills the role traditional media used to play in some people’s lives. Not only are fewer people reading the newspaper and relying on quality, objective journalism to understand the world, but that inexperience with traditional media also breeds ignorance about what reporters, including specialists in the legal media, do all day and why they do it.

A young attorney who does not read the most important media outlets in the legal industry may not have a proper understanding of how law leaders use the information and data reporters publish to make business decisions and innovate at the practice and firm level. While managing partners may not always be pleased with the coverage of their firm, they understand and accept that the health of the industry relies on these sources of objective information. What’s more, for every article that makes a law partner squirm, there is one that amplifies a firm’s accomplishments for the entire industry to see.

Those media mentions are worth their weight in gold, but you have to respect and understand the institution of legal journalism as a whole to ever have a chance at winning one for yourself or your firm.

Not all media is the same. The media landscape of 2019 exists across four categories: paid, owned, shared, and earned. Paid media is sponsored content and pay-to-play awards and features. Owned media is the content your firm creates and distributes through your website and newsletter. Shared media is social media and all the content it spreads so rapidly. And earned media encompasses mentions in traditional media outlets.

A sophisticated communications strategy creates a plan for all four categories and, importantly, recognizes the strengths and weaknesses of each one. The first step to making sense of it all is to recognize the tension between control and authority. Media that allows your firm complete control over the content — your Twitter feed, for example — does not carry much authority. Consumers understand that anyone can make any claim they like on the internet. Media outlets that carry authority in the industry — such as Bloomberg Law or the Wall Street Journal — are not going to offer you much control over the content. Their independence is what gives them authority.

Attorneys who are too focused on controlling the message will miss out on the chance to see their work featured in an outlet that prospective clients and recruits actually trust.

Your right to privacy is not unlimited in scope. While individuals, of course, have the right to live their private lives free from interference, attorneys engaged in work on behalf of law firms and companies, which in many cases involves actions that are matters of public record, should expect to occasionally face questions about that work. Fearing these encounters or, worse, painting this healthy professional interaction as some kind of victimization, is bad for both the legal industry and an attorney’s own career development. Attorneys who understand the role traditional media plays in their business development make themselves available to reporters and are ready to speak off the cuff about their cases, clients and the broader context of legal questions they spend time on.

Savvy lawyers have confidence that their integrity and expertise will stand up to scrutiny by a reporter, and they extend professional courtesy to journalists doing the hard work of chronicling a complex and dynamic industry.

As the media landscape continues to evolve, marketers and firm leaders will have to work harder than ever to play in all four media categories — paid, owned, shared and earned — and prepare their attorneys to build productive relationships with the reporters who can help them reach their desired audience.


© 2019 Page2 Communications. All rights reserved.

This article was written by Debra Pickett of Page 2 Communications.
For more advice for young lawyers, see the National Law Review Law Office Management section.

Privacy Tip #219 – FBI Considers FaceApp a Counterintelligence Threat

For those of you who have downloaded the face editing app FaceApp, please note that the Federal Bureau of Investigation (FBI) has classified FaceApp as a counterintelligence threat because of its Russian origins.

According to the FBI, “[T]he FBI considers any mobile application or similar product developed in Russia, such as FaceApp, to be a potential counterintelligence threat, based on the data the product collects, its privacy and terms of use policies, and the legal mechanisms available to the Government of Russia that permit access to data within Russia’s borders.”

When the FBI considers an app a security threat to the U.S., we all should. Downloading apps, in general, is risky, but downloading apps developed in foreign countries that are trying to obtain information about U.S. citizens – and in fact are obtaining information from unwitting U.S. citizens – potentially puts us in danger.

Now is the time to perform app hygiene. Check the apps on your phone to determine whether you are using them or not. If you aren’t using them, delete them. There is no reason to continue to allow them to collect your information if you are not using them and getting a benefit from them. If you are using them and can’t live without them, do some due diligence to determine the background of the app, read the Privacy Policy and Terms of Use to know what they are collecting and using about you, and delete the app if your gut tells you something’s not right. If you have downloaded FaceApp, that would be the first one to delete.


Copyright © 2019 Robinson & Cole LLP. All rights reserved.

China’s TikTok Facing Privacy & Security Scrutiny from U.S. Regulators, Lawmakers

Perhaps it is a welcome reprieve for Facebook, Google and YouTube. A competing video-sharing social media company based in China has drawn the attention of U.S. privacy officials and lawmakers, with a confidential investigation under way and public hearings taking place on Capitol Hill.

Reuters broke the story that the Treasury Department’s Committee on Foreign Investment in the United States (CFIUS) is conducting a national security review of the owners of TikTok, a social media video-sharing platform that claims a young but formidable U.S. audience of 26.5 million users. CFIUS is engaged in the context of TikTok owner ByteDance Technology Co.’s $1 billion acquisition of U.S. social media app Musical.ly two years ago, a deal ByteDance did not present to the agency for review.

Meanwhile, U.S. legislators are concerned about censorship of political content, such as coverage of protests in Hong Kong, and the location and security of personal data the company stores on U.S. citizens.

Sen. Josh Hawley (R-Mo.), Chairman of the Judiciary Committee’s Subcommittee on Crime and Terrorism, invited TikTok and others to testify in Washington this week for hearings titled “How Corporations and Big Tech Leave Our Data Exposed to Criminals, China, and Other Bad Actors.”

While TikTok did not send anyone to testify, the company’s recently appointed General Manager for North America and Australia, Vanessa Pappas, formerly with YouTube, sent a letter indicating that the company does not store data on U.S. citizens in China. She explained in an open letter on the TikTok website, which reads similarly to the letter reportedly sent to the subcommittee, that the company is very much aware of its privacy obligations and U.S. regulations and is taking a number of measures to meet them.

For nearly eight years Pappas served as Global Head of Creative Insights, and before that Audience Development, for YouTube. In late 2018 she was a strategic advisor to ByteDance, and in January 2019 she became TikTok’s U.S. General Manager. In July her territory expanded to North America and Australia. Selecting someone who held such a leadership role at YouTube, which is widely used and familiar to Americans, to lead U.S. operations may help calm the nerves of U.S. regulators. But given U.S. tensions with China over trade, security and intellectual property, TikTok and Pappas have a way to go.

Some commentators think Facebook must enjoy watching TikTok get its turn in the spotlight, especially since TikTok is a growing competitor to Facebook in the younger market. If only briefly, it may divert some of the global attention being paid to the social media giant’s privacy and data collection practices, and the many fines that have followed.

It’s clear that TikTok has Facebook’s attention. TikTok, which allows users to create and share short videos with special effects, did a great deal of advertising on Facebook. The ads clearly targeted the teen demographic and were apparently successful. CEO Mark Zuckerberg recently said in a speech that mentions of the Hong Kong protests were censored in TikTok feeds in China and in the United States, something TikTok denied. In a case of unfortunate timing, Zuckerberg this week posted that 100 or so software developers may have improperly accessed Facebook user data.

Since TikTok is largely a short-video sharing application, it competes at some level with YouTube in the youth market. In the third quarter of 2019, 81 percent of U.S. internet users aged 15 to 25 accessed YouTube, according to figures collected by Statista. YouTube boasts more than 126 million monthly active users in the U.S., 100 million more than TikTok.

Potential counterintelligence threat ‘we cannot ignore’

Last month, U.S. Senate Minority Leader Chuck Schumer (D-NY) and Senator Tom Cotton (R-AR) asked the Acting Director of National Intelligence to conduct a national security probe of TikTok and other Chinese companies. They expressed concern about the collection of user data, about whether the Chinese government censors content feeds to the U.S., as Zuckerberg suggested, and about whether foreign influencers were using TikTok to advance their objectives.

“With over 110 million downloads in the U.S. alone,” the Schumer and Cotton letter read, “TikTok is a potential counterintelligence threat we cannot ignore. Given these concerns, we ask that the Intelligence Community conduct an assessment of the national security risks posed by TikTok and other China-based content platforms operating in the U.S. and brief Congress on these findings.” They must be happy with Sen. Hawley’s hearings.

In her statement, TikTok GM Pappas offered the following assurances:

  • U.S. user data is stored in the United States with backup in Singapore — not China.
  • TikTok’s U.S. team does what’s best for the U.S. market, with “the independence to do so.”
  • The company is committed to operating with greater transparency.
  • California-based employees lead TikTok’s moderation efforts for the U.S.
  • TikTok uses machine learning tools and human content reviews.
  • Moderators review content for adherence to U.S. laws.
  • TikTok has a dedicated team focused on cybersecurity and privacy policies.
  • The company conducts internal and external reviews of its security practices.
  • TikTok is forming a committee of users to serve them responsibly.
  • The company has banned political advertising.

Both TikTok and YouTube have been stung by failing to follow the rules when it comes to the youth and children’s market. In February, TikTok agreed to pay $5.7 million to settle the FTC’s case, which alleged that, through the Musical.ly app, TikTok illegally collected personal information from children. At the time, it was the largest civil penalty ever obtained by the FTC in a case brought under the Children’s Online Privacy Protection Act (COPPA). The law requires websites and online services directed at children to obtain parental consent before collecting personal information from kids under 13. That record was smashed in September, though, when Google and its YouTube subsidiary agreed to pay $170 million to settle allegations brought by the FTC and the New York Attorney General that YouTube was also collecting personal information from children without parental consent. The settlement required Google and YouTube to pay $136 million to the FTC and $34 million to New York.

Quality degrades when near-monopolies exist

What I am watching for here is whether (and how) TikTok and other social media platforms respond to these scandals by competing on privacy.

For example, in its early years Facebook lured users with the promise of privacy. It was eventually successful in defeating competitors that offered little in the way of privacy, such as MySpace, which fell from a high of 75.9 million users to 8 million today. But as Facebook developed a dominant position in social media through acquisition of competitors like Instagram or by amassing data, the quality of its privacy protections degraded. This is to be expected where near-monopolies exist and anticompetitive mergers are allowed to close.

Now perhaps the pendulum is swinging back. As privacy regulation and publicity around privacy transgressions increase, competitive forces may come back into play, forcing social media platforms to compete on the quality of their consumer privacy protections once again. That would be a great development for consumers.

 


© MoginRubin LLP

ARTICLE BY Jennifer M. Oliver of MoginRubin.
Edited by Tom Hagy for MoginRubin LLP.
For more on social media app privacy concerns, see the National Law Review Communications, Media & Internet law page.

California DMV Exposes 3,200 SSNs of Drivers

The California Department of Motor Vehicles (DMV) announced on November 5, 2019, that it allowed the Social Security numbers (SSNs) of 3,200 California drivers to be accessed by unauthorized individuals in other state and federal agencies, including the Internal Revenue Service, the Small Business Administration and the district attorneys’ offices in Santa Clara and San Diego counties.

According to a news report, the access included the full Social Security numbers of individuals who were being investigated for criminal activity or compliance with tax laws. Apparently, the access also allowed investigators to see which drivers didn’t have Social Security numbers, which has given immigration advocates concern.

The DMV stated that the incident was not a hack, but rather, an error, and the unauthorized access was terminated when it was discovered on August 2, 2019. Nonetheless, the DMV notified the 3,200 drivers of the incident and the exposure of their personal information. The DMV issued a statement that it has “taken additional steps to correct this error, protect this information and reaffirm our serious commitment to protect the privacy rights of all license holders.”

 

Copyright © 2019 Robinson & Cole LLP. All rights reserved.
For more on data security, see the National Law Review Communications, Media & Internet law page.

Can You Spy on Your Employees’ Private Facebook Group?

For years, companies have encountered issues stemming from employee communications on social media platforms. When such communications take place in private groups not accessible to anyone except approved members, though, it can be difficult for an employer to know what actually is being said. But can a company try to get intel on what’s being communicated in such forums? A recent National Labor Relations Board (NLRB) case shows that, depending on the circumstances, such actions may violate labor law.

At issue in the case was a company that was facing unionizing efforts by its employees. Some employees of the company were members of a private Facebook group and posted comments in the group about potentially forming a union. Management became aware of this activity and repeatedly asked one of its employees who had access to the group to provide management with reports about the comments. The NLRB found this conduct to be unlawful and held: “It is well-settled that an employer commits unlawful surveillance if it acts in a way that is out of the ordinary in order to observe union activity.”

This case provides another reminder that specific rules come into play when employees are considering forming a union. Generally, companies cannot:

  • Threaten employees based on their union activity
  • Interrogate workers about their union activity, sentiments, etc.
  • Make promises to employees to induce them to forgo joining a union
  • Engage in surveillance (i.e., spying) on workers’ union organizing efforts

The employer’s “spying” in this instance ran afoul of these parameters, which can have costly consequences, such as overturned discipline and backpay awards.


© 2019 BARNES & THORNBURG LLP

For more on employees’ social media use, see the National Law Review Labor & Employment law page.

Hackers Eavesdrop and Obtain Sensitive Data of Users Through Home Smart Assistants

Although Amazon and Google respond to reports of vulnerabilities in their popular home smart assistants Alexa and Google Home, hackers continue to work hard to exploit any vulnerabilities that let them listen to users’ every word and obtain sensitive information that can be used in future attacks.

Last week, ZDNet reported that two security researchers at Security Research Labs (SRLabs) discovered phishing and eavesdropping vectors that exploit the “functions that developers can use to customize the commands to which a smart assistant responds, and the way the assistant replies.” In other words, the hackers can abuse the very technology that Amazon and Google provide to app developers for the Alexa and Google Home products.

By inserting certain commands into the back end of a normal Alexa/Google Home app, the attacker can silence the assistant for long periods of time, even though the assistant remains active. After the silence, the attacker sends a phishing message that appears to have nothing to do with the app the user interacted with: a fake message, designed to look as if it comes from Amazon or Google, asking for the user’s account password. Once the hacker has access to the home assistant, the hacker can eavesdrop on the user, keep the listening device active and record the user’s conversations. Obviously, when attackers can eavesdrop on every word, even when the device appears to be turned off, they can obtain highly personal information that can be used malevolently in the future.

The manufacturers of home smart assistants reiterate to users that the devices will never ask for their account passwords. Cyber hygiene for home assistants is no different from cyber hygiene for email.


Copyright © 2019 Robinson & Cole LLP. All rights reserved.

For more hacking risk mitigation, see the National Law Review Communications, Media & Internet law page.

CCPA Alert: California Attorney General Releases Draft Regulations

On October 10, 2019, the California Attorney General released the highly anticipated draft regulations for the California Consumer Privacy Act (CCPA). The regulations focus heavily on three main areas: 1) notices to consumers, 2) consumer requests and 3) verification requirements. While the regulations focus heavily on these three topics, they also discuss special rules for minors, non-discrimination standards and other aspects of the CCPA. Despite high hopes, the regulations do not provide the clarity many companies desired. Instead, the regulations layer on new requirements while sprinkling in further ambiguities.

The most surprising new requirements proposed in the regulations include:

  • New disclosure requirements for businesses that collect personal information from more than 4,000,000 consumers
  • Businesses must acknowledge the receipt of consumer requests within 10 days
  • Businesses must honor “Do Not Sell” requests within 15 days and inform any third parties who received the personal information of the request within 90 days
  • Businesses must obtain consumer consent to use personal information for a use not disclosed at the time of collection

The following are additional highlights from each of the three main areas:

1. Notices to consumers

The regulations discuss four types of notices to consumers: notice at the time of collection, notice of the right to opt-out of the sale of personal information, notice of financial incentives and a privacy policy. All required notices must be:

  • Easy to read in plain, straightforward language
  • In a format that draws the consumer’s attention to the notice
  • Accessible to those with disabilities
  • Available in all languages in which the company regularly conducts business

The regulations make clear that updating your privacy policy is necessary, but not sufficient, for CCPA compliance. You must also provide notice to consumers at the time of data collection, and that notice must be visible and accessible before any personal information is collected; no personal information may be collected without proper notice. You may use your privacy policy as the notice at the time of collection, but you must link to the specific section of your privacy policy that provides the statutorily required notice.

The regulations specifically provide that for offline collection, businesses could provide a paper version of the notice or post prominent signage. Similar to the General Data Protection Regulation (GDPR), a company may only use personal information for the purposes identified at the time of collection. Otherwise, the business must obtain explicit consent to use the personal information for a new purpose.

In addition to the privacy policy requirements in the statute itself, the regulations require more privacy policy disclosures. For example, the business must include instructions on how to verify a consumer request and how to exercise consumer rights through an agent. Further, the privacy policy must identify the following information for each category of personal information collected: the sources of the information, how the information is used and the categories of third parties to whom the information is disclosed. For businesses that collect personal information of 4,000,000 or more consumers, the regulations require additional disclosures related to the number of consumer requests and the average response times. Given the additional nuances of the disclosure requirements, we recommend working with counsel to develop your privacy policy.

If a business provides financial incentives to a consumer for allowing the sale of their personal information, then the business must provide a notice of the financial incentive. The notice must include a description of the incentive, its material terms, instructions on how to opt-in to the incentive, how to withdraw from the incentive and an explanation of why the incentive is permitted by CCPA.

Finally, the regulations state that service providers that collect personal information on behalf of a business may not use that personal information for their own purposes. Instead, they are limited to performing only their obligations under the contract between the business and service provider. The contract between the parties must also include the provisions described in CCPA to ensure that the relationship is a service provider/business relationship, and not a sale of personal information between a business and third party.

2. Consumer requests

Businesses must provide at least two methods for consumers to submit requests (most commonly an online form and a toll-free number), and one of the methods must reflect the manner in which the business primarily interacts with the consumer. In addition, businesses that substantially interact with consumers offline must provide an offline method for consumers to exercise their right to opt-out, such as providing a paper form. The regulations specifically call out that in-person retailers may therefore need three methods: a paper form, an online form and a toll-free number.

The regulations do limit some consumer request rights by prohibiting the disclosure of Social Security numbers, driver’s license numbers, financial account numbers, medical-related identification numbers, passwords, and security questions and answers. Presumably, this is for two reasons: the individual should already know this information and most of these types of information are subject to exemptions from CCPA.

One of the most notable clarifications related to requests is that the 45-day timeline to respond to a consumer request includes any time required to verify the request. Additionally, the regulations introduce a new timeline requirement for consumer requests: businesses must confirm receipt of a request within 10 days. Another new requirement is that businesses must respond to opt-out requests within 15 days and must inform all third parties to stop selling the consumer’s information within 90 days. Further, the regulations require that businesses maintain records of consumer requests for 24 months.
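
For compliance teams that track these deadlines in software, the sketch below shows one way the timelines might be encoded. The day counts come from the draft regulations as described above; the names, structure and example are illustrative assumptions only, not anything the regulations prescribe.

    from datetime import date, timedelta

    # Day counts reflect the draft CCPA regulations discussed above; the names
    # and structure here are hypothetical, for illustration only.
    DEADLINE_DAYS = {
        "acknowledge_receipt": 10,   # confirm receipt of a consumer request
        "respond_to_request": 45,    # includes any time spent verifying the request
        "honor_opt_out": 15,         # stop selling after a "Do Not Sell" request
        "notify_third_parties": 90,  # tell recipients of the data to stop selling it
    }
    RECORD_RETENTION_MONTHS = 24     # keep logs of consumer requests this long

    def due_dates(received: date) -> dict:
        """Return the calendar due date for each obligation, given the request date."""
        return {name: received + timedelta(days=days) for name, days in DEADLINE_DAYS.items()}

    # Example: a request received January 2, 2020
    for obligation, due in due_dates(date(2020, 1, 2)).items():
        print(f"{obligation}: {due.isoformat()}")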

3. Verification requirements

The most helpful guidance in the regulations relates to verification requests. The regulations provide that a more rigorous verification process should apply to more sensitive information. That is, businesses should not release sensitive information without being highly certain about the identity of the individual requesting the information. Businesses should, where possible, avoid collecting new personal information during the verification process and should instead rely on confirming information already in the business’ possession. Verification can be through a password-protected account provided that consumers re-authenticate themselves. For websites that provision accounts to users, requests must be made through that account. Matching two data points provided by the consumer with data points maintained by the business constitutes verification to a reasonable degree of certainty, and the matching of three data points constitutes a high degree of certainty.
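
For illustration only, the sketch below shows how that tiered data-point matching might look in an intake tool. The two- and three-point thresholds follow the regulations as described above; the function name, field names and return labels are assumptions, not terms drawn from the regulations.

    # Hypothetical sketch of the tiered verification idea described above.
    # Field names and labels are illustrative assumptions.
    def verification_level(submitted: dict, on_file: dict) -> str:
        """Count how many consumer-supplied data points match what the business already holds."""
        matches = sum(
            1
            for field, value in submitted.items()
            if field in on_file and on_file[field] == value
        )
        if matches >= 3:
            return "high degree of certainty"
        if matches == 2:
            return "reasonable degree of certainty"
        return "not verified"

    # Example: two of three data points match -> reasonable degree of certainty
    print(verification_level(
        {"email": "pat@example.com", "zip": "53202", "phone": "555-0100"},
        {"email": "pat@example.com", "zip": "53202", "phone": "555-0199"},
    ))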

The regulations also provide prescriptive steps of what to do in cases where an identity cannot be verified. For example, if a business cannot verify the identity of a person making a request for access, then the business may proceed as if the consumer requested disclosure of only the categories of personal information, as opposed to the content of such personal information. If a business cannot verify a request for deletion, then the business should treat the request as one to opt-out of the sale of personal information.

Next steps

These draft regulations add new wrinkles, and some clarity, to what is required for CCPA compliance. As we move closer to January 1, 2020, companies should continue to focus on preparing compliant disclosures and notices, finalizing their privacy policies and establishing procedures to handle consumer requests. Despite the need to press forward on compliance, the regulations are open to initial public comment until December 6, 2019, with a promise to finalize the regulations in the spring of 2020. We expect further clarity as these draft regulations go through the comment process and privacy professionals, attorneys, businesses and other stakeholders weigh in on their clarity and reasonableness.


Copyright © 2019 Godfrey & Kahn S.C.

For more on CCPA implementation, see the National Law Review Consumer Protection law page.

LinkedIn Petitions Circuit Court for En Banc Review of hiQ Scraping Decision

On October 11, 2019, LinkedIn Corp. (“LinkedIn”) filed a petition for rehearing en banc of the Ninth Circuit’s blockbuster decision in hiQ Labs, Inc. v. LinkedIn Corp., No. 17-16783 (9th Cir. Sept. 9, 2019). The crucial question before the original panel concerned the scope of Computer Fraud and Abuse Act (CFAA) liability for unwanted web scraping of publicly available social media profile data and whether, once hiQ Labs, Inc. (“hiQ”), a data analytics firm, received LinkedIn’s cease-and-desist letter demanding it stop scraping public profiles, any further scraping of such data was “without authorization” within the meaning of the CFAA. The appeals court affirmed the lower court’s order granting a preliminary injunction barring LinkedIn from blocking hiQ from accessing and scraping publicly available LinkedIn member profiles to create competing business analytic products. Most notably, the Ninth Circuit held that hiQ had shown a likelihood of success on the merits in its claim that when a computer network generally permits public access to its data, a user’s accessing that publicly available data will not constitute access “without authorization” under the CFAA.

In its petition for en banc rehearing, LinkedIn advanced several arguments, including:

  • The hiQ decision conflicts with the Ninth Circuit Power Ventures precedent, where the appeals court held that a commercial entity that accesses a website after permission has been explicitly revoked can, under certain circumstances, be civilly liable under the CFAA. Power Ventures involved password-protected Facebook user data (which users had initially given a data aggregator permission to access). LinkedIn argued that the hiQ court’s logic in distinguishing Power Ventures was flawed and that the manner in which a user classifies his or her profile data should have no bearing on a website owner’s right to protect its physical servers from trespass.

“Power Ventures thus holds that computer owners can deny authorization to access their physical servers within the meaning of the CFAA, even when users have authorized access to data stored on the owner’s servers. […] Nothing about a data owner’s decision to place her data on a website changes LinkedIn’s independent right to regulate who can access its website servers.”

  • The language of the CFAA should not be read to allow for “authorization” to be assumed (and unable to be revoked) for publicly available website data, either under Ninth Circuit precedent or under the CFAA-related case law of other circuits.

“Nothing in the CFAA’s text or the definition of ‘authorization’ that the panel employed—‘[o]fficial permission to do something; sanction or warrant’—suggests that enabling websites to be publicly viewable is not ‘authorization’ that can be revoked.”

  • The privacy interests enunciated by LinkedIn on behalf of its users are “of exceptional importance,” and the court discounted the fact that hiQ is “unaccountable” and has no contractual relationship with LinkedIn users, such that hiQ could conceivably share the scraped data or aggregate it with other data.

“Instead of recognizing that LinkedIn members share their information on LinkedIn with the expectation that it will be viewed by a particular audience (human beings) in a particular way (by visiting their pages)—and that it will be subject to LinkedIn’s sophisticated technical measures designed to block automated requests—the panel assumed that LinkedIn members expect that their data will be ‘accessed by others, including for commercial purposes,’ even purposes antithetical to their privacy setting selections. That conclusion is fundamentally wrong.”

Both website operators and open internet advocates will be watching closely to see if the full Ninth Circuit decides to rehear the appeal, given the importance of the CFAA issue and the prevalence of data scraping of publicly available website content. We will keep a close watch on developments.


© 2019 Proskauer Rose LLP.

Whatever Happened to that Big Ringless Voicemail Decision We Were All Expecting? It Was a Nothing Burger—For Now

You’ll recall a few weeks back TCPAWorld.com featured analysis of efforts by VoApps—makers of the DirectDrop ringless voicemail platform—to stem the tide of negative TCPA rulings addressing ringless voicemail technologies. VoApps founder David King even joined the Unprecedented podcast to discuss his submission of a lengthy declaration to the court addressing how the technology works and why it is not covered by the TCPA.

Well, a few days ago the Court issued its ruling on the pending motion—a summary judgment effort by the Plaintiff—and I must say, it was rather anti-climactic. Indeed, the court punted entirely on the key issue.

In Saunders v. Dyck O’Neal, Case No. 1:17-CV-335, 2019 U.S. Dist. LEXIS 177606 (W.D. Mich. Oct. 4, 2019), the court issued its highly anticipated ruling on the Plaintiff’s bid to earn judgment following the Court’s earlier ruling that a ringless voicemail is a call under the TCPA. It was in response to this motion that VoApps submitted a mountain of evidence that, although a ringless voicemail may be a “call,” it is not a call to a number assigned to a cellular service—and so such calls are not actionable under the TCPA’s infamous section 227(b).

Rather than answer the question directly the Court made mincemeat of the Federal Rules of Civil Procedure and treated the summary judgment motion as if it were some sort of motion to confirm the Court’s earlier ruling. This is weird because: i) no it wasn’t; and ii) there’s no such thing. As the Court put it: “Admittedly, Saunders moved for summary judgment, but her motion is in fact limited to a request for clarification of the impact of the Court’s prior ruling: Was the Court’s prior ruling that DONI’s messaging technology falls within the purview of the TCPA a ruling as a matter of law that binds the parties going forward? The answer is clearly yes.”

Great. So we now know what we already all knew—the Saunders court holds that a ringless voicemail is a call. Got it. As to the key issue of whether the calls were made to a landline or to a cell phone, however, the Court finds: “These issues were unnecessary to Saunders’s motion, as she has not [actually] moved for summary judgment on her claim.”

So there you go. Plaintiff’s motion for summary judgment was not actually a motion for summary judgment after all. So all that work by VoApps was for nothing. But not really. Obviously, this fight is not yet over. The Court declined to enter judgment in favor of the Plaintiff, meaning that further work—and perhaps a trial—lies ahead for the good folks over at VoApps. We’ll keep you posted.

 



© Copyright 2019 Squire Patton Boggs (US) LLP

For more on voicemail & phone regulation, see the National Law Review Communications, Media & Internet law page.

Resist the Urge to Access: the Impact of the Stored Communications Act on Employer Self-Help Tactics

As an employer or manager, have you ever collected a resigning employee’s employer-owned laptop or cellphone and discovered that the employee left a personal email account automatically logged in? Did you have the urge to look at what the employee was doing and who the employee was talking to right before resigning? Perhaps to see if he or she was talking to your competitors or customers? If so, you should resist that urge.

The federal Stored Communications Act, 18 U.S.C. § 2701 et seq., is a criminal statute that makes it an offense to “intentionally access[] without authorization a facility through which an electronic communication service is provided[] and thereby obtain[] . . . access to a[n] . . . electronic communication while it is in electronic storage . . . .” It also creates a civil cause of action for victims of such offenses, remedied by (i) actual damages of at least $1,000; (ii) attorneys’ fees and court costs; and, potentially, (iii) punitive damages if the access was willful or intentional.

So how does this criminal statute apply in a situation in which an employee uses a personal email account on an employer-owned electronic device—especially if an employment policy confirms there is no expectation of privacy on the employer’s computer systems and networks? The answer is in the technology itself.

Many courts find that the “facility” referenced in the statute is the server on which the email account resides—not the company’s computer or other electronic device. In one 2013 federal case, a former employee left her personal Gmail account automatically logged in when she returned her company-owned smartphone. Her former supervisor allegedly used that smartphone to access over 48,000 emails on the former employee’s personal Gmail account. The former employee later sued her former supervisor and her former employer under the Stored Communications Act. The defendants moved to dismiss the claim, arguing, among other things, that a smartphone was not a “facility” under the statute.

While agreeing with that argument in principle, the court concluded that it was, in fact, Gmail’s server that was the “facility” for purposes of Stored Communications Act claims. The court also rejected the defendants’ arguments (i) that because it was a company-owned smartphone, the employee had in fact authorized the review, and (ii) that the former employee was responsible for any alleged loss of privacy, because she left the door open to the employer reviewing the Gmail account.

Similarly, in a 2017 federal case, a former employee sued her ex-employer for allegedly using her returned cell phone to access her Gmail account on at least 40 occasions. To assist in the prosecution of a restrictive covenant claim against the former employee, the former employer allegedly arranged to forward several of those emails to the employer’s counsel, including certain allegedly privileged emails between the former employee and her lawyer. The court denied the former employer’s motion to dismiss the claim based on those allegations.

Interestingly, some courts, including the courts in both of the above-referenced cases, draw a line on liability under the Stored Communications Act based on whether the emails that were accessed had already been opened at the time of access. This line of reasoning is premised on a finding that opened-but-undeleted emails are not in “storage for backup purposes” under the Stored Communications Act. But this distinction is not universal.

In another 2013 federal case, for example, an individual sued his business partner under the Stored Communications Act after the defendant logged on to the other’s Yahoo account using his password. A jury trial resulted in a verdict for the plaintiff on that claim, and the defendant filed a motion for judgment as a matter of law. The defendant argued that she only read emails that had already been opened and that they were therefore not in “electronic storage” for “purposes of backup protection.” The court disagreed, stating that “regardless of the number of times plaintiff or defendant viewed plaintiff’s email (including by downloading it onto a web browser), the Yahoo server continued to store copies of those same emails that previously had been transmitted to plaintiff’s web browser and again to defendant’s web browser.” So again, the court read the Stored Communications Act broadly, stating that “the clear intent of the SCA was to protect a form of communication in which the citizenry clearly has a strong reasonable expectation of privacy.”

Based on the broad reading of the Stored Communications Act in which many courts across the country engage, employers and managers are well advised to exercise caution before reviewing an employee’s personal communications that may be accessible on a company electronic device. Even policies informing employees not to expect privacy on company computer systems and networks may not save the employer or manager from liability under the statute. So seek legal counsel if this opportunity presents itself upon an employee’s separation from the company. And resist the urge to access before doing so.


© 2019 Foley & Lardner LLP
For more on the Stored Communications Act, see the National Law Review Communications, Media & Internet law page.