My Business Is In Arizona, Why Do I Care About California Privacy Laws? How the CCPA Impacts Arizona Businesses

Arizona businesses are not typically concerned about complying with the newest California laws going into effect. However, one California law in particular—the CCPA or California Consumer Privacy Act—has a scope that extends far beyond California’s border with Arizona. Indeed, businesses all over the world that have customers or operations in California must now be mindful of whether the CCPA applies to them and, if so, whether they are in compliance.

What is the CCPA?

The CCPA is a comprehensive data privacy law enacted by the California Legislature that took effect on January 1, 2020. It was signed into law in 2018 and has undergone a series of substantive amendments since.

Generally, the CCPA gives California consumers a series of rights with respect to how companies acquire, store, use, and sell their personal data. The CCPA’s combination of mandatory disclosures and notices, rights of access, rights of deletion, statutory fines, and threat of civil lawsuits is a significant move towards empowering consumers to control their personal data.

Many California businesses are scrambling to implement the necessary policies and procedures to comply with the CCPA in 2020. In fact, you may have begun to notice privacy notices on the primary landing page for national businesses. However, Arizona businesses cannot assume that the CCPA stops at the Arizona border.

Does the CCPA apply to my business in Arizona?

The CCPA has specific criteria for whether a company is considered a California business. The CCPA applies to for-profit businesses “doing business in the State of California” that also:

  • Have annual gross revenues in excess of twenty-five million dollars;
  • Handle the personal information of 50,000 or more California consumers, households, or devices per year; or
  • Derive 50 percent or more of their annual revenue from selling California consumers’ personal information.

The CCPA does not include an express definition of what it means to be “doing business” in California. While it will take courts some time to interpret the scope of the CCPA, any business with significant sales, employees, property, or operations in California should consider whether the CCPA might apply to them.
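
To make those thresholds concrete, here is a minimal sketch of the applicability test. It is illustrative only, not legal advice: the type and field names are hypothetical, and whether a company is “doing business” in California remains a legal judgment rather than a flag you can compute.

```ts
// Illustrative sketch of the CCPA threshold tests; all names are
// hypothetical, and the "doing business in California" element
// requires legal analysis rather than a simple boolean.
interface BusinessProfile {
  doesBusinessInCalifornia: boolean;
  annualGrossRevenueUSD: number;
  caConsumersHouseholdsOrDevicesPerYear: number;
  shareOfRevenueFromSellingCaPersonalInfo: number; // 0.0 to 1.0
}

function ccpaMayApply(b: BusinessProfile): boolean {
  if (!b.doesBusinessInCalifornia) return false;
  return (
    b.annualGrossRevenueUSD > 25_000_000 ||
    b.caConsumersHouseholdsOrDevicesPerYear >= 50_000 ||
    b.shareOfRevenueFromSellingCaPersonalInfo >= 0.5
  );
}
```

Note that the three prongs are alternatives: meeting any one of them, together with doing business in California, can bring a company within the statute’s scope.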

How do I know if I am collecting a consumer’s personal information?

“Personal information” under the CCPA generally includes any information that “identifies, relates to, describes, is capable of being associated with, or could reasonably be linked” with a specific consumer. As the legalese of this definition implies, “personal information” includes a wide swath of data that your company may already be collecting about consumers.

There is no doubt that personal identifiers like name, address, email address, and social security number are personal information. But categories like biometric data, search and browsing activity, IP addresses, purchase history, and professional or employment-related information are also expressly included under the CCPA’s definition. Moreover, the broad nature of the CCPA means that other categories of collected data—although not expressly identified by the CCPA—may be deemed “personal information” in an enforcement action.

What can I do to comply with the CCPA?

If the CCPA might apply to your company, now is the time to take action. Compliance will necessarily be different for each business depending on the nature of its operation and the use(s) of personal information. However, there are some common steps that each company can take.

The first step towards compliance with the CCPA is understanding what data your company collects, how it is stored, whether it is transferred or sold, and whether any vendors or subsidiaries also have access to the data. Next, an organization should prepare a privacy notice that complies with the CCPA to post on its website and include in its app interface.

The most substantial step in complying with the CCPA is to develop and implement policies and procedures that help the company conform to the various provisions of the CCPA. The policies will need to provide up-front disclosures to consumers, allow consumers to opt-out, handle consumer requests to produce or delete personal information, and guard against any perceived discrimination against consumers that exercise rights under the CCPA.

The company will also need to review contracts with third-party service providers and vendors to ensure it can comply with the CCPA. For example, if a third-party cloud service will be storing personal information, the company will want to verify that its contract allows it to assemble and produce that information within statutory deadlines if requested by a consumer.

At least you have some time!

The good news is that the CCPA includes a grace period until July 1, 2020 before the California Attorney General can bring enforcement actions. Thus, Arizona businesses that may have ignored the quirky California privacy law to this point have a window to bring their operations into compliance. However, Arizona companies that may need to comply with the CCPA should consult with counsel as soon as possible to begin the process. The attorneys at Ryley Carlock & Applewhite are ready to help you analyze your risk and comply with the CCPA.


Copyright © 2020 Ryley Carlock & Applewhite. A Professional Association. All Rights Reserved.

Learn more about the California Consumer Privacy Act (CCPA) on the National Law Review Communications, Media & Internet Law page.

CISA Releases “Cyber Essentials” to Assist Small Businesses

On November 6, 2019, the Department of Homeland Security (“DHS”), Cybersecurity & Infrastructure Security Agency (“CISA”) released its Cyber Essentials guide. Consistent with the NIST Cybersecurity Framework, these Cyber Essentials provide “a starting point to cyber readiness,” and are specifically aimed at small businesses and local government agencies that may have fewer resources to dedicate to cybersecurity.

The guide suggests a holistic approach for managing cyber risks, and is broken down into six “Essential Elements of a Culture of Cyber Readiness,” specifically:

  • Yourself – driving awareness, strategy, and investment to build and sustain a culture of cybersecurity.
  • Your Staff – developing awareness and vigilance because your staff is often the first line of defense.
  • Your Systems – protecting your information and critical assets and applications.
  • Your Surroundings – limiting access to your digital environment.
  • Your Data – having a contingency plan to recover systems, networks, and data from trusted backups.
  • Your Actions Under Stress – planning and conducting drills for cyberattacks to bolster readiness to respond, limit damage, and restore operations in the event of an attack.

The final section of the guide provides a list of steps that small businesses can take immediately to increase organizational preparedness against cyber risks. These include backing up data (automatically and continuously), implementing multi-factor authentication (particularly for privileged, administrative, and remote access users), enabling automatic updates, and deploying patches quickly.
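
One of those steps, multi-factor authentication, is concrete enough to sketch. The example below shows server-side verification of a time-based one-time password (TOTP), one common second factor. It assumes the open-source otplib npm package; where the per-user secret is stored and how it is presented to the user are hypothetical details.

```ts
// Minimal TOTP sketch using the otplib package (npm install otplib).
// Storage of the per-user secret and the enrollment flow are
// hypothetical details, not a prescribed implementation.
import { authenticator } from 'otplib';

// Enrollment: generate a secret for the user, typically shown to
// them as a QR code to load into an authenticator app.
const secret = authenticator.generateSecret();

// Login: after the password check succeeds, require the current
// six-digit code from the user's authenticator app.
function verifySecondFactor(userSuppliedCode: string): boolean {
  return authenticator.verify({ token: userSuppliedCode, secret });
}
```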

CISA’s Cyber Essentials guide is just the most recent example of a user-friendly resource aimed at assisting small businesses seeking lower-cost cybersecurity solutions. Recognizing that investing in cybersecurity may be difficult for some small businesses, government agencies are making an effort to help small businesses understand its importance.

For example, the U.S. Small Business Administration (“SBA”) has a page dedicated to providing information and resources for small business cybersecurity. It outlines common threats, risk assessment, and cybersecurity best practices. It also provides a list of upcoming training and events related to small business cybersecurity. Other entities, including the National Institute of Standards and Technology, the Federal Trade Commission, and the Federal Communications Commission also provide similar resources specifically tailored to small businesses.

The main takeaway here is that all organizations – regardless of size or resources – should take basic steps to improve their cybersecurity resilience.


Copyright © 2019, Sheppard Mullin Richter & Hampton LLP.

ARTICLE BY Jonathan E. Meyer, Townsend L. Bourne, and Nikole Snyder (a law clerk) of Sheppard, Mullin, Richter & Hampton LLP’s Washington, D.C. office.

ADA Website Litigation Likely to Increase

There has been considerable confusion amongst business owners as to the requirements of the Americans with Disabilities Act (ADA) as it relates to websites. The ADA requires, among other things, that places of “public accommodation” remove barriers to access for people with disabilities. This law has long been understood to apply to brick-and-mortar establishments, such as restaurants, retail stores, and hotels, but recent court decisions have held that the ADA applies to the websites and mobile applications of businesses offering goods and services online.

The Department of Justice (DOJ), which is responsible for establishing regulations pursuant to the ADA, has thus far failed to issue any guidance, regulations, or technical standards for online platforms, resulting in uncertainty for many business owners. Many have looked to the case of Robles v. Domino’s Pizza, LLC for potential guidance. Robles was filed by a blind man who claimed that he could not access the Domino’s website and mobile app with his screen-reading software. The District Court dismissed the case on the basis that, although the ADA applied to the website and app, the DOJ’s failure to provide guidance as to the ADA’s application to websites violated Domino’s due process rights. The Ninth Circuit reversed this ruling, and on October 7, 2019, the U.S. Supreme Court denied a petition by Domino’s Pizza asking the Court to review the Ninth Circuit’s decision.

The Supreme Court’s refusal to review the Ninth Circuit decision maintains the uncertainty in what will no doubt be an expanding field of litigation. Business owners should expect to see an increase in ADA website litigation, and should take steps to ensure that their websites and mobile apps are accessible to disabled users.



© 2010-2019 Allen Matkins Leck Gamble Mallory & Natsis LLP

More website regulation on the National Law Review Internet, Communications & Media law page.

CPSC Staff Addresses IoT 2018 Hearing Feedback, IoT Project Plans in New Report

Connected products can make the world a safer place: electronic sensors in the home can detect problems and send smartphone notifications to the homeowner; smart alert devices can notify family members or home help companies that an elderly person has fallen and needs assistance. But with over 64 billion connected products in the marketplace, there is a concern that connected devices could introduce hazards that might lead to a risk of injury due to problems with software updates or customization, faulty connections, and even consumer modifications.

As the body charged with overseeing consumer product safety in the U.S., over the last few years, the Consumer Product Safety Commission (CPSC) has shown an increasing interest in defining its role with regard to connected products. In May 2018, the CPSC held a public hearing on IoT, obtaining feedback from a range of stakeholders on potential risks of connected consumer products and the agency’s role. In late September, CPSC staff submitted to the Commission a status report outlining the CPSC’s work on consumer product IoT issues since the public hearing. The report also outlines how CPSC staff understands the agency’s role, which is safeguarding consumers from potential physical product risks, as well as how its work intersects with the jurisdiction of other agencies as they oversee connected products.

The report notes that this is an ongoing process, stating that CPSC staff is working on “how to define consumer product safety in terms of the IoT, the intersection of, and interdependencies among, consumer product safety, data security and privacy, and how our traditional risk management approaches apply to connected products.” The report acknowledges that privacy and data security are not within CPSC’s jurisdiction, but notes that at least one participant in CPSC’s 2018 hearing warned that “CPSC should pay attention to certain cybersecurity threats that create opportunities for physical harm, a risk not previously considered, and resist creating any prescriptive rules for IoT devices.”

To increase institutional knowledge of IoT benefits and challenges, CPSC has dedicated resources to develop its staff’s expertise. CPSC has also participated in developing voluntary standards, has taken a leadership role in establishing an interagency IoT working group, and has been developing its capability to simulate home networks at its laboratory.

The staff report outlines three ongoing internal projects relating to IoT. The first involves developing a methodology for assessing safety-related implications arising out of software and firmware updates to connected products. This project sits at what CPSC views as the intersection of product safety and data security and the potential “hazardization” of connected products as a result of data vulnerabilities. CPSC is also looking at connected heating appliances and the risks associated with their remote activation. Finally, CPSC is studying smart toys “in an effort to identify physical safety hazards.” It is surprising that CPSC staff would dedicate resources to toys as opposed to other products, like in-home safety devices, since the physical safety of toys is strictly regulated by the mandatory toy safety standard, ASTM F-963. The likelihood of physical hazardization of toys is far lower than that of, for example, connected home security devices and sensors. In those categories, connectivity, and thus security breaches that affect the operation of those devices, may be directly related to both safety risks and advantages. Indeed, home safety devices are a category where we have actually seen CPSC recall activity.

The report notes that CPSC is engaging in product safety assessments of connected and shared e-scooters. This is likely in response to reports of e-scooters that were vulnerable to hacking. The emerging hazards of micro-mobility devices such as shared e-scooters are also a focus of CPSC’s Operating Plan for Fiscal Year 2020 and represent another product category that appears more vulnerable to hazardization than connected toys.

CPSC staff intended to develop a best practices guide for industry and consumers on connected products, which was an enumerated project in the proposed Operating Plan for Fiscal Year 2020. However, an amendment introduced by Commissioner Feldman focuses CPSC’s resources on IoT intergovernmental work instead. Given the report’s acknowledgment that the agency is still working to develop staff expertise in IoT, attempting to create such a guide appears premature at this juncture.

The sharp increase in the number of connected devices in the market means it is necessary and appropriate for CPSC to continue to build expertise on IoT issues, even though there are still very few examples of actual product safety hazards attributable to connectivity failures. It would be useful for CPSC to focus its efforts and resources on product categories that pose a higher potential risk to the physical safety of consumers through hazardization or failure as a result of connectivity, without overstating potential risks. It is encouraging that, through the intergovernmental initiatives, a variety of federal agencies are working collaboratively to better understand the consumer protection issues potentially raised by connected products within their respective jurisdictions.


© 2019 Keller and Heckman LLP

For more CPSC regulation, see the National Law Review Consumer Protection law page.

LinkedIn Petitions Circuit Court for En Banc Review of hiQ Scraping Decision

On October 11, 2019, LinkedIn Corp. (“LinkedIn”) filed a petition for rehearing en banc of the Ninth Circuit’s blockbuster decision in hiQ Labs, Inc. v. LinkedIn Corp., No. 17-16783 (9th Cir. Sept. 9, 2019). The crucial question before the original panel concerned the scope of Computer Fraud and Abuse Act (CFAA) liability for unwanted web scraping of publicly available social media profile data and whether, once hiQ Labs, Inc. (“hiQ”), a data analytics firm, received LinkedIn’s cease-and-desist letter demanding it stop scraping public profiles, any further scraping of such data was “without authorization” within the meaning of the CFAA. The appeals court affirmed the lower court’s order granting a preliminary injunction barring LinkedIn from blocking hiQ from accessing and scraping publicly available LinkedIn member profiles to create competing business analytic products. Most notably, the Ninth Circuit held that hiQ had shown a likelihood of success on the merits in its claim that when a computer network generally permits public access to its data, a user’s accessing that publicly available data will not constitute access “without authorization” under the CFAA.

In its petition for en banc rehearing, LinkedIn advanced several arguments, including:

  • The hiQ decision conflicts with the Ninth Circuit’s Power Ventures precedent, where the appeals court held that a commercial entity that accesses a website after permission has been explicitly revoked can, under certain circumstances, be civilly liable under the CFAA. Power Ventures involved password-protected Facebook user data (which users had initially given a data aggregator permission to access). LinkedIn argued that the hiQ court’s logic in distinguishing Power Ventures was flawed and that the manner in which a user classifies his or her profile data should have no bearing on a website owner’s right to protect its physical servers from trespass.

“Power Ventures thus holds that computer owners can deny authorization to access their physical servers within the meaning of the CFAA, even when users have authorized access to data stored on the owner’s servers. […] Nothing about a data owner’s decision to place her data on a website changes LinkedIn’s independent right to regulate who can access its website servers.”

  • The language of the CFAA should not be read to allow for “authorization” to be assumed (and unable to be revoked) for publicly available website data, either under Ninth Circuit precedent or under the CFAA-related case law of other circuits.

“Nothing in the CFAA’s text or the definition of ‘authorization’ that the panel employed—“[o]fficial permission to do something; sanction or warrant,” suggests that enabling websites to be publicly viewable is not ‘authorization’ that can be revoked.”

  • The privacy interests enunciated by LinkedIn on behalf of its users are “of exceptional importance,” and the court discounted the fact that hiQ is “unaccountable” and has no contractual relationship with LinkedIn users, such that hiQ could conceivably share the scraped data or aggregate it with other data.

“Instead of recognizing that LinkedIn members share their information on LinkedIn with the expectation that it will be viewed by a particular audience (human beings) in a particular way (by visiting their pages)—and that it will be subject to LinkedIn’s sophisticated technical measures designed to block automated requests—the panel assumed that LinkedIn members expect that their data will be ‘accessed by others, including for commercial purposes,’ even purposes antithetical to their privacy setting selections. That conclusion is fundamentally wrong.”

Both website operators and open internet advocates will be watching closely to see if the full Ninth Circuit decides to rehear the appeal, given the importance of the CFAA issue and the prevalence of data scraping of publicly available website content. We will keep a close watch on developments.


© 2019 Proskauer Rose LLP.

Can We Really Forget?

I expected this post would turn out differently.

I had intended to commend the European Court of Justice for placing sensible limits on the extraterritorial enforcement of the EU’s Right to be Forgotten. They did, albeit in a limited way,[1] and it was a good decision. There.  I did it. In 154 words.

Now for the remaining 1400 or so words.

But reading the decision pushes me back into frustration at the entire Right to be Forgotten regime and its illogical and destructive basis. The fact that a court recognizes the clear fact that the EU cannot (generally) force foreign companies to violate the laws of their own countries in internet sites that are intended for use within those countries (and NOT the EU), does not come close to offsetting the logical, practical and societal problems with the way the EU perceives and enforces the Right to be Forgotten.

As a lawyer, with all decisions grounded in the U.S. Constitution, I am comfortable with the First Amendment’s protection of Freedom of Speech – that nearly any truthful utterance or publication is inviolate, and that the foundation of our political and social system depends on open exposure of facts to sunlight. Intentionally shoving those true facts into the dark is wrong in our system and openness will be protected by U.S. courts.

Believe it or not, the European Union has such a concept at the core of its foundation too. Article 10 of the European Convention on Human Rights states that:

“Everyone has the right to freedom of expression. This right shall include freedom to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers.”

So we have the same values, right? In both jurisdictions the right to impart information can be exercised without interference by public authority.  Not so fast.  The EU framework contains a litany of restrictions on this right, including limits on free speech in the name of protecting the reputation of others.

This seems like a complete evisceration of a right to open communication if a court can force obfuscation of facts just to protect someone’s reputation.  Does this person deserve a bad reputation? Has he or she committed a crime, failed to pay his or her debts, harmed animals or children, stalked an ex-lover, or violated an oath of office, marriage, priesthood or citizenship? It doesn’t much matter in the EU. The right of that person to hide his/her bad or dangerous behavior outweighs both the allegedly fundamental right to freedom to impart true information AND the public’s right to protect itself from someone who has proven himself/herself to be a risk to the community.

So how does this tension play out over the internet? In the EU, it is law that Google and other search engines must remove links to true facts about any wrongdoer who feels his/her reputation may be tarnished by the discovery of the truth about that person’s behavior. Get into a bar fight?  Don’t worry, the EU will put the entire force of law behind your request to wipe that off your record. Stiff your painting contractors for tens of thousands of Euros despite their good performance? Don’t worry, the EU will make sure nobody can find out. Get fired, removed from office or defrocked for dishonesty? Don’t worry, the EU has your back.

And that undercutting of speech rights has now been codified in Article 17 of Regulation 2016/679, the Right to be Forgotten.

And how does this new decision affect the rule? In the past couple weeks, the Grand Chamber of the EU Court of Justice issued an opinion limiting the extraterritorial reach of the Right to be Forgotten. (Google vs CNIL, Case C‑507/17) The decision confirms that search engines must remove links to certain embarrassing instances of true reporting, but must only do so on the versions of the search engine that are intentionally servicing the EU, and not necessarily in versions of the search engines for non-EU jurisdictions.

The problems with appointing Google to be an extrajudicial magistrate enforcing vague EU-granted rights under a highly ambiguous set of standards and then fining them when you don’t like a decision you forced them to make, deserve a separate post.

Why did we even need this decision? Because the French data privacy protection agency, known as CNIL, fined Google for not removing presumably true data from non-EU search results concerning, as Reuters described, “a satirical photomontage of a female politician, an article referring to someone as a public relations officer of the Church of Scientology, the placing under investigation of a male politician and the conviction of someone for sexual assaults against minors.”  So, to be clear, while the official French agency believes it should enforce a right for people to obscure from the whole world that they have been convicted of sexual assault against children, the Grand Chamber of the European Court of Justice believes that people convicted of child sexual assault should be protected in their right to obscure these facts only from people in Europe. This is progress.

Of course, in the U.S., politicians and other public figures, under investigation or subject to satire or people convicted of sexual assault against children do not have a right to protect their reputations by forcing Google to remove links to public records or stories in news outlets. We believe both that society is better when facts are allowed to be reported and disseminated and that society is protected by reporting on formal allegations against public figures or criminal convictions of private ones.

I am glad that the EU Court of Justice is willing to restrict rules to remain within its jurisdiction where they openly conflict with the basic laws of other jurisdictions. The Court sensibly held,

“The idea of worldwide de-referencing may seem appealing on the ground that it is radical, clear, simple and effective. Nonetheless, I do not find that solution convincing, because it takes into account only one side of the coin, namely the protection of a private person’s data.[2] . . . [T]he operator of a search engine is not required, when granting a request for de-referencing, to operate that de-referencing on all the domain names of its search engine in such a way that the links at issue no longer appear, regardless of the place from which the search on the basis of the requester’s name is carried out.”

Any other decision would be wildly overreaching. Believe me, every country in the EU would be howling in protest if the U.S. decided that its views of personal privacy must be enforced in Europe, by European companies, for operations intended only to affect Europe. It should work both ways. So this was a well-reasoned limitation.

But I just cannot bring myself to be complimentary of a regime that I find so repugnant – where nearly any bad action can be swept under the rug in the name of protecting a person’s reputation.

As I have written in books and articles in the past, government protection of personal privacy is crucial for the clean and correct operation of a democracy.  However, privacy is also the obvious refuge of scoundrels – people prefer to keep the bad things they do private. Who wouldn’t? But one can go overboard protecting this right, and it feels like the EU has institutionalized its leap overboard.

I would rather err on the side of sunshine, giving up some privacy in the service of revealing the truth, than err on the side of darkness, allowing bad deeds to be obscured so that those who commit them can maintain their reputations.  Clearly, the EU doesn’t agree with me.


[1] The Court, in this case, wrote, “The issues at stake therefore do not require that the provisions of Directive 95/46 be applied outside the territory of the European Union. That does not mean, however, that EU law can never require a search engine such as Google to take action at worldwide level. I do not exclude the possibility that there may be situations in which the interest of the European Union requires the application of the provisions of Directive 95/46 beyond the territory of the European Union; but in a situation such as that of the present case, there is no reason to apply the provisions of Directive 95/46 in such a way.”

[2] EU Court of Justice case C-136/17, which states, “While the data subject’s rights [to privacy] override, as a general rule, the freedom of information of internet users, that balance may, however, depend, in specific cases, on the nature of the information in question and its sensitivity for the data subject’s private life and on the interest of the public in having that information. . . .”



Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more EU’s GDPR enforcement, see the National Law Review Communications, Media & Internet law page.

Vimeo Hit with Class Action for Alleged Violations of Biometric Law

Vimeo, Inc. was sued last week in a class action case alleging that it violated the Illinois Biometric Information Privacy Act by “collecting, storing and using Plaintiff’s and other similarly situated individuals’ biometric identifiers and biometric information…without informed written consent.”

According to the Complaint, Vimeo “has created, collected and stored, in conjunction with its cloud-based Magisto service, thousands of ‘face templates’ (or ‘face prints’)—highly detailed geometric maps of the face—from thousands of Magisto users.” The suit alleges that Vimeo creates these templates using facial recognition technology and that “[e]ach face template that Vimeo extracts is unique to a particular individual, in the same way that a fingerprint or voiceprint uniquely identifies one and only one person.” The plaintiffs are trying to liken an image captured by facial recognition technology to a fingerprint by calling it a “faceprint.” That is a creative move in the wake of mixed reactions to the use of facial recognition technology in the Facebook and Shutterfly cases.

The suit alleges that “users of Magisto upload millions of videos and/or photos per day, making videos and photographs a vital part of the Magisto experience….Users can download and connect any mobile device to Magisto to upload and access videos and photos to produce and edit their own videos….Unbeknownst to the average consumer, and in direct violation of…BIPA, Plaintiff…believes that Magisto’s facial recognition technology scans each and every video and photo uploaded to Magisto for faces, extracts geometric data relating to the unique points and contours (i.e., biometric identifiers) of each face, and then uses that data to create and store a template of each face—all without ever informing anyone of this practice.”

The suit further alleges that when a user uploads a photo, the Magisto service creates a template for each face depicted in the photo, and compares that face with others in its face database to see if there is a match. According to the Complaint, the templates are also able to recognize gender, age and location and are able to collect biometric information from non-users. All of this is done without consent of the individuals, and in alleged violation of BIPA.

Although we previously have seen some facial recognition cases alleging violation of BIPA, and there are numerous cases alleging violation of BIPA for collection of fingerprints in the employment setting, this case is a little different from those, and it will be interesting to watch.



Copyright © 2019 Robinson & Cole LLP. All rights reserved.
For more on biometrics & privacy see the National Law Review Communications, Media & Internet law page.

WIPO Launches UDRP for .CN and .中国 ccTLD

The World Intellectual Property Organization (WIPO) has launched a Uniform Domain-Name Dispute-Resolution Policy (UDRP) service for the .CN and .中国 (China) country code Top-Level Domains (ccTLDs), becoming the first non-Chinese entity to do so. Previously, only the China International Economic and Trade Arbitration Commission Online Dispute Solution Center (CIETAC ODRC) and the Hong Kong International Arbitration Center (HKIAC) were authorized by the China Internet Network Information Center (CNNIC) to handle domain name disputes for these domains. The .CN and .中国 ccTLDs are among the largest in the world, with over 22 million registered domain names.

The WIPO UDRP for the .CN and .中国 ccTLDs applies only to .CN and .中国 domain names that have been registered for less than three years.  In contrast to the conventional UDRP, the Chinese UDRP applies to domain names that are identical or confusingly similar not only to a mark, but to any “name” in which the complainant has civil rights or interests.

The complainant must prove that either registration or use of the disputed domain name is in bad faith, but not both as in the traditional UDRP.  Examples of bad faith provided by WIPO include:

  • The purpose for registering or acquiring the domain name is to sell, rent or otherwise transfer the domain name registration to the complainant who is the owner of the name or mark or to a competitor of that complainant, and to obtain unjustified benefits;
  • The disputed domain name holder, on many occasions, registers domain names in order to prevent owners of the names or marks from reflecting the names or the marks in corresponding domain names;
  • The disputed domain name holder has registered or acquired the domain name for the purpose of damaging the Complainant’s reputation, disrupting the Complainant’s normal business or creating confusion with the Complainant’s name or mark so as to mislead the public;
  • Other circumstances that may prove bad faith.

The language of proceedings will be in Chinese unless otherwise agreed by the parties or determined by the Panel.  More information is available at WIPO’s site.


© 2019 Schwegman, Lundberg & Woessner, P.A. All Rights Reserved.

For more on internet IP concerns, see the National Law Review Intellectual Property law page.

An Introduction to Digital Marketing Analytics for Law Firms

Analytics is the science behind the art of digital marketing and is an invaluable part of any law firm’s marketing strategy.  Digital analytics can provide a wealth of information that not only tells you how effective your digital channels are, but if analyzed properly, can provide fact-based information that can help identify prospect behavior and needs.  Legal marketers must be familiar with the proper web analytics tools to be able to capture and interpret the performance of their website but must also be able to go beyond their website to properly assess the performance and return of their digital marketing efforts. In this article we discuss how law firms can merge web analytics and digital marketing analytics to improve the overall performance of their online marketing efforts.

Why Marketing Analytics Matter

To benefit from your marketing analytics, you must understand why they are important.  Analytics enables you to measure, analyze, and manage the performance of your digital marketing strategy in real time.  Legal marketers can leverage analytics to pinpoint problems and uncover areas of opportunity and potential growth. Knowing which digital marketing analytics are important to your bottom line will help you stay focused on the metrics that matter.

Why You Need to Measure Your Efforts

Your analytics should always include a measurement of your efforts, for one simple reason: sustainability. Essentially, you want to ensure that your marketing efforts are contributing to the growth of your firm and producing a positive return on investment (ROI).  By learning to create custom reports and glean actionable insights that actually matter to your firm, you can begin to make data-driven decisions and strengthen your marketing strategy.  At any point during your campaign, you can apply the data to tweak or remove tactics that are not producing the desired return.  By effectively leveraging data intelligence, you will build a more insightful and effective marketing plan.  The good news is that the process of tracking digital marketing has become very straightforward: your chosen marketing tool probably has analytics built in, helping you measure your efforts.

Web Analytics Can Measure Those Efforts

Supplementing the built-in analytics with your marketing analytics is simple. Numerous websites offer web analytics tools, with Google Analytics being among the most popular. The most attractive reason to use Google Analytics is that it’s free!  Google Analytics will show you the number of views and visits to your website, where those visitors come from and what pages they are looking at. This information enables you to easily see how your website is performing in real time. Google Analytics will even allow you to organize and track website data across all devices – smartphones, tablets, computers and even smart TVs.  To learn how to create a custom Google Analytics report, check out this blog post.
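
As a concrete illustration, here is a minimal sketch of recording a custom site interaction with Google Analytics’ gtag.js API. It assumes the standard gtag snippet is already installed on the page; the event name and custom parameters are hypothetical placeholders you would define yourself.

```ts
// Hypothetical event tracking via gtag.js; assumes the standard
// Google Analytics snippet is already loaded on the page.
declare function gtag(...args: unknown[]): void;

function trackContactFormSubmit(): void {
  gtag('event', 'contact_form_submit', {
    // Custom parameters are illustrative placeholders.
    practice_area: 'data_privacy',
    page_location: window.location.href,
  });
}
```

Events like this one are what turn raw pageview counts into the conversion data that the KPIs below depend on.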

Using KPIs

Evaluation of your efforts will come down to KPIs, or Key Performance Indicators. You choose these criteria as measurements to demonstrate how effective your marketing strategies are at achieving your key business objectives. As such, they let you see if you are meeting your business goals, whether they are brand awareness or lead generation. Your marketing plan should include the KPIs that most closely match your specific business objectives. Consider the following KPIs, as they are the most important for optimizing your campaigns (a worked example of the underlying arithmetic appears after the list):

  • Qualified Leads: This KPI lets you confirm that your campaign is generating qualified leads for your law firm.
  • Cost Per Conversion: This metric allows you to evaluate how much you pay for each conversion and confirm that your marketing expenses are worth the expenditure.
  • Online Marketing Return on Investment: It should go without saying that your return on investment will let you know whether your efforts are worth it.
  • Form Conversion Rates: Form conversion rates help you determine how well your content marketing drives people to your law firm’s website and encourages them to take action. You can even combine data from the forms with the form conversion rates to develop strategies that boost conversions.
  • Reach and Share of Voice on Social Media: Don’t forget to consider the conversions that you get from social media as one of your KPIs. This lets you determine which social media network is best for you so that you can concentrate your efforts for the best ROI.
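
To make the arithmetic behind several of these KPIs concrete, here is a minimal sketch. The field names are hypothetical placeholders; the actual numbers would come from your own analytics and billing data.

```ts
// Illustrative KPI arithmetic; all field names are hypothetical.
interface CampaignStats {
  adSpendUSD: number;            // total campaign spend
  conversions: number;           // e.g., consultation requests
  revenueAttributedUSD: number;  // revenue traced to the campaign
  formViews: number;
  formSubmissions: number;
}

function computeKpis(s: CampaignStats) {
  return {
    costPerConversion: s.adSpendUSD / s.conversions,
    // ROI as a ratio: 1.0 means the campaign returned double its cost.
    returnOnInvestment:
      (s.revenueAttributedUSD - s.adSpendUSD) / s.adSpendUSD,
    formConversionRate: s.formSubmissions / s.formViews,
  };
}

// Example: $5,000 of spend producing 25 conversions gives a $200 cost
// per conversion; $20,000 in attributed revenue gives an ROI of 3.0.
```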

Takeaway

As a legal marketer, you must understand the impact of your marketing efforts through the effective use of digital marketing analytics and web analytics.  Applying data intelligence is invaluable to the process of improving and optimizing your marketing strategy.


© Copyright 2019 Good2BSocial

ARTICLE BY Talia Schwartz of Good2bSocial.
For more on marketing for law firms, see the Law Office Management page on the National Law Review.

Recent COPPA Settlements Offer Compliance Reminders

The recently announced FTC settlement with YouTube and its parent company, as well as the 2018 settlement between the New York Office of the Attorney General and Oath, have set a new bar when it comes to COPPA compliance.

The settlements offer numerous takeaways, including reminders to those that use persistent identifiers to track children online and deliver targeted ads to them.  These takeaways include, but are not limited to, the following.

FTC Chairman Joseph Simons stated that “YouTube touted its popularity with children to prospective corporate clients … yet when it came to complying with COPPA, the company refused to acknowledge that portions of its platform were clearly directed to kids.”

First, under COPPA, a child-directed website or online service – or a site that has actual knowledge it’s collecting or maintaining personal information from a child – must give clear notice on its site of “what information it collects from children, how it uses such information and its disclosure practices for such information.”

Second, the website or service must give direct notice to parents of their practices “with regard to the collection, use, or disclosure of personal information from children.”

Third, prior to collecting personal information from children under 13, COPPA-covered companies must get verifiable parental consent.

COPPA’s definition of “personal information” specifically includes persistent identifiers used for behavioral advertising.  It is critical to note that third-party platforms are subject to COPPA when they have actual knowledge they are collecting personal information from users of a child-directed website.

In March 2019, the FTC handed down what was then the largest civil penalty ever for violations of COPPA, following allegations that Musical.ly knew many of its users were children and still failed to seek parental consent.  There, the FTC charged that Musical.ly failed to provide notice on its website of the information it collects online from children, how it uses it and its disclosure practices; failed to provide direct notice to parents; failed to obtain consent from parents before collecting personal information from children; failed to honor parents’ requests to delete personal information collected from children; and retained personal information for longer than reasonably necessary.

Content creators must know COPPA’s requirements.

If a platform hosting third-party content knows that content is directed to children, it is unlawful to collect personal information from viewers without getting verifiable parental consent.

While it may be fine for most commercial websites geared to a general audience to include a corner for children, if that portion of the website collects information from users, COPPA obligations are triggered.

Comprehensive COPPA policies and procedures to protect children’s privacy are a good idea, as are competent oversight, COPPA training for relevant personnel, the identification of risks that could result in violations of COPPA, the design and implementation of reasonable controls to address the identified risks, the regular monitoring of the effectiveness of those controls, and the development and use of reasonable steps to select and retain service providers that can comply with COPPA.

The FTC and the New York Attorney General are serious about COPPA enforcement.  Companies should exercise caution with respect to such data collection practices.



© 2019 Hinch Newman LLP