SEC Sanctions Operator of Unregistered Virtual Currency Exchanges

Katten Muchin Law Firm

On December 8, the Securities and Exchange Commission sanctioned a computer programmer for operating two online exchanges that traded securities using virtual currencies without registering the exchanges as broker-dealers or stock exchanges. The programmer, Ethan Burnside, operated the two exchanges through his company, BTC Trading Corp., from August 2012 to October 2013. Account holders were able to purchase securities in virtual currency businesses using bitcoins on BTC Virtual Stock Exchange and using litecoins on LTC-Global Virtual Stock Exchange. The exchanges were not registered as broker-dealers but solicited the public to open accounts and trade securities. The exchanges also were not registered as stock exchanges but enlisted issuers to offer securities to the public for purchase and sale. Burnside also offered shares in LTC-Global Virtual Stock Exchange itself, as well as interests in a separate Litecoin mining venture, LTC-Mining, in exchange for virtual currencies. The SEC charged Burnside with willful violations of Sections 5(a) and 5(c) of the Securities Act of 1933, and charged Burnside and BTC Trading Corp. with willful violations of Sections 5 and 15(a) of the Securities Exchange Act of 1934. Burnside cooperated with the SEC’s investigation and settled, disgorging more than $68,000 in profits plus interest and paying a penalty. The SEC also barred Burnside from the securities industry.

The action may indicate that the SEC is taking a closer look at decentralized platforms for trading virtual currency using cryptocurrency technology, but the SEC has neither confirmed nor denied such speculation. In recent months, the SEC has reportedly sent voluntary information requests to companies and online “crypto-equity exchanges” offering equity and related interests denominated in virtual currency and websites offering digital tokens for programming platforms. A discussion of the SEC’s voluntary information sweep is available here.



QVC Sues Shopping App for Web Scraping That Allegedly Triggered Site Outage

Proskauer Law Firm

Operators of public-facing websites are typically concerned about the unauthorized, technology-based extraction of large volumes of information from their sites, often by competitors or others in related businesses. The practice, usually referred to as screen scraping, web harvesting, crawling or spidering, has been the subject of many questions and a fair amount of litigation over the last decade.

However, despite the litigation in this area, the state of the law on this issue remains somewhat unsettled: neither scrapers looking to access data on public-facing websites nor website operators seeking remedies against scrapers that violate their posted terms of use have very concrete answers as to what is permissible and what is not.

In the latest scraping dispute, the e-commerce site QVC objected to the Pinterest-like shopping aggregator Resultly’s scraping of QVC’s site for real-time pricing data.  In its complaint, QVC claimed that Resultly “excessively crawled” QVC’s retail site (purportedly sending search requests to QVC’s website at rates ranging from 200-300 requests per minute to up to 36,000 requests per minute), causing a crash that wasn’t resolved for two days and resulting in lost sales.  (See QVC Inc. v. Resultly LLC, No. 14-06714 (E.D. Pa. filed Nov. 24, 2014)). The complaint alleges that the defendant disguised its web crawler to mask its source IP address and thus prevented QVC technicians from identifying the source of the requests and quickly repairing the problem.  QVC brought some of the causes of action often alleged in this type of case, including violations of the Computer Fraud and Abuse Act (CFAA), breach of contract (QVC’s website terms of use), unjust enrichment, tortious interference with prospective economic advantage, conversion, and negligence.  Of these and other causes of action typically alleged in these situations, the breach of contract claim is often the clearest source of a remedy.

This case is a particularly interesting scraping case because QVC is seeking damages for the unavailability of its website, which QVC alleges was caused by Resultly.  This is an unusual theory of recovery in these types of cases.  For example, this past summer, LinkedIn settled a scraping dispute with Robocog, the operator of HiringSolved, a “people aggregator” employee recruiting service, over claims that the service employed bots to register false accounts in order to scrape LinkedIn member profile data and thereafter post it to its own service without authorization from LinkedIn or its members.  LinkedIn brought various claims under the DMCA and the CFAA, as well as state law claims of trespass and breach of contract, but did not allege that its service was unavailable due to the defendant’s activities.  The parties settled the matter, with Robocog agreeing to pay $40,000, cease crawling LinkedIn’s site and destroy all LinkedIn member data it had collected.  (LinkedIn Corp. v. Robocog Inc., No. 14-00068 (N.D. Cal. Proposed Final Judgment filed July 11, 2014).)

However, in one of the early, yet still leading cases on scraping, eBay, Inc. v. Bidder’s Edge, Inc., 100 F. Supp. 2d 1058 (N.D. Cal. 2000), the district court touched on the foreseeable harm that could result from screen scraping activities, at least when taken in the aggregate.  In the case, the defendant Bidder’s Edge operated an auction aggregation site and accessed eBay’s site about 100,000 times per day, accounting for between 1 and 2 percent of the information requests received by eBay and a slightly smaller percentage of the data transferred by eBay. The court rejected eBay’s claim that it was entitled to injunctive relief because of the defendant’s unauthorized presence alone, or because of the incremental cost the defendant had imposed on operation of the eBay site, but found sufficient proof of threatened harm in the potential for others to imitate the defendant’s activity.

It remains to be seen whether the parties will reach a resolution or the court will have a chance to interpret QVC’s claims, and whether QVC can offer sufficient evidence that Resultly’s activities caused the website outage.

Companies concerned about scraping should make sure that their website terms of use are clear about what is and isn’t permitted, and that the terms are positioned on the site to support their enforceability. In addition, website owners should ensure they are using “robots.txt,” crawl delays and other technical means to communicate their intentions regarding scraping.  Companies that are interested in scraping should evaluate the terms at issue and other circumstances to understand the limitations in this area.
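
For the technically inclined, the sketch below (a minimal, hypothetical example, not drawn from any of the cases above) shows how a scraper can check a site’s robots.txt and honor any declared crawl delay before requesting pages, the kind of throttling whose absence is at the heart of the QVC complaint. The site URL, user-agent string, and fallback delay are illustrative assumptions.

```python
# Illustrative sketch: fetch pages only where robots.txt allows it, and
# throttle requests using the site's declared Crawl-delay if it has one.
# The site, user agent, and fallback delay below are hypothetical.
import time
import urllib.robotparser
import urllib.request

SITE = "https://www.example.com"          # hypothetical target site
USER_AGENT = "ExampleResearchBot/1.0"     # hypothetical crawler identity

# Fetch and parse the site's robots.txt once, up front.
robots = urllib.robotparser.RobotFileParser()
robots.set_url(SITE + "/robots.txt")
robots.read()

# Respect an explicit Crawl-delay if declared; otherwise be conservative.
delay = robots.crawl_delay(USER_AGENT) or 10  # seconds between requests

def polite_fetch(path):
    """Fetch a page only if robots.txt permits it, then wait before the next request."""
    url = SITE + path
    if not robots.can_fetch(USER_AGENT, url):
        return None  # the operator has disallowed this path for crawlers
    req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
    with urllib.request.urlopen(req) as resp:
        data = resp.read()
    time.sleep(delay)  # throttle so the crawl does not burden the site
    return data
```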


Social Media Marketing for Lawyers: What It Can Do for You, How to Do It Right

The Rainmaker Institute

Many attorneys I talk with want to know if social media will deliver real value for the investment in time and effort that it takes to develop and implement a social media marketing program.

Here is what I tell them:

Social media will help you build trust, but it will not make a “bad” reputation better. Social media is a meritocracy – if you’re good, people will know it. Conversely, a bad experience will also get talked about. Building trust is crucial for attorneys, and social media helps you build trust by providing a robust platform for sharing your particular insights and knowledge. Once people trust that, they will use you and recommend you to others.

Social media will get you leads, but it will not turn them into paying clients. People who follow you on Twitter, become fans of yours on Facebook, or interact with you in any way on a social network have indicated an interest in what you have to say. These are leads. To capitalize on them and turn them into paying clients, however, requires effort on your part in following up.

Social media will give you visibility, but it will not replace a good client experience. Social media is a 365/24/7 world, allowing you to engage with prospects at any time, and they with you. You must be vigilant about responding to posts and questions the same way you would in responding to a prospect that calls or emails you. Every point of contact is an opportunity to make a great impression.

Social media is the fastest way to build your sphere of influence, but it won’t happen overnight. Your sphere of influence is defined as how many people know (1) who you are, (2) who you help, and (3) why you are different.  If you only have 20 people who know enough about you to send you the right referrals, then you are severely limited in how much you will be able to grow your practice.   Social media is a long-term play, and you need to commit to spending the time and money (either yours or hiring someone else) to achieve success.


DOJ Settlement Suggests Push to Expand ADA Coverage to All Websites and Apps

Morgan Lewis

The chance of future DOJ investigations justifies companies’ reviews of customer-oriented websites and apps for accessibility.

As consumers continue to use the Internet and their smartphones for their shopping in astonishing numbers, especially on this Cyber Monday, a recent Department of Justice (DOJ) settlement agreement raises questions and potentially serious implications for any company with customer-oriented websites or mobile applications. The settlement agreement requires Ahold U.S.A., Inc. and Peapod, LLC (Peapod) to make the www.peapod.com website and Peapod’s mobile applications accessible to the disabled, including persons with vision, hearing, and manual impairments. The settlement agreement demonstrates that the DOJ is reviewing and/or monitoring websites and mobile apps for accessibility and remains aggressive in its push to extend the requirements of Title III of the Americans with Disabilities Act (ADA) to all websites and mobile apps—even when the sites are unrelated to actual physical places of public accommodation. According to the settlement agreement, the DOJ concluded that www.peapod.com was inaccessible to the disabled after initiating a “compliance review” authorized by Title III and its implementing regulations.[1] Peapod, however, contested the DOJ’s conclusion that www.peapod.com and Peapod’s mobile apps were not ADA compliant.

The settlement agreement is particularly noteworthy because www.peapod.com is a purely online grocery delivery service, unrelated to a “brick and mortar” physical place of public accommodation. Most courts considering application of the ADA to websites require a website to have a “nexus” to a physical place.[2] In the past, the DOJ has required websites and mobile apps to be accessible—for example, in a March 2014 consent decree with H&R Block. However, unlike the H&R Block consent decree, which involved a website and mobile apps with a nexus to physical places, the Peapod settlement agreement requires that a website and apps with no nexus to a physical place be made accessible to the disabled. The Peapod settlement agreement therefore shows that the DOJ’s Notice of Proposed Rulemaking (NPRM), which is expected in March 2015, may require—in the words of the Abstract for the DOJ’s NPRM—the websites and apps of “private entities of all types,” even “[s]ocial networks and other online meeting places” to comply with the ADA.

The settlement agreement also indicates which standards the DOJ’s regulations eventually may require websites and mobile apps to meet. The settlement agreement requires www.peapod.com and Peapod’s mobile apps to comply with the Web Content Accessibility Guidelines 2.0, Level AA (WCAG 2.0 AA). The DOJ has required compliance with the WCAG 2.0 AA in the past, including in the H&R Block consent decree. The Peapod settlement agreement further requires Peapod to:

  • designate a Website Accessibility Coordinator to coordinate compliance with the agreement;
  • adopt a Website and Mobile Application Accessibility Policy;
  • post a notice about its accessibility policy on its home page, including a toll-free number for assistance and a solicitation for feedback;
  • annually train website content personnel on conforming Web content and apps to the WCAG 2.0 AA;
  • seek contractual commitments from its vendors to provide conforming content, or (for content not subject to a written contract) seek out content that conforms to the WCAG 2.0 AA;
  • modify bug fix priority policies to include the elimination of bugs that create accessibility barriers; and
  • conduct automated accessibility tests of the website and apps at least once every six months and transmit the results to the government.

The settlement agreement, which stays in effect for three years, additionally provides that every 12 months, the Website Accessibility Coordinator must submit a report to the government detailing Peapod’s compliance or noncompliance with the agreement. Peapod is not the only entity that will conduct testing under the settlement agreement: at least once annually, individuals with vision, hearing, and manual disabilities will test the usability of the Web pages. Notably, however, the settlement agreement does not impose damages or a civil penalty on Peapod.

There is a chance that the DOJ’s eventual regulations will differ from the standards to which the DOJ requires Peapod to conform. The settlement agreement accounts for that possibility. It states that if the DOJ promulgates final regulations on website accessibility technical standards during the term of the settlement agreement, the parties must meet and confer at either’s request to discuss whether the agreement must be modified to make it consistent with the regulations.


[1]See 42 U.S.C. § 12188(b)(1)(A)(i) (“The Attorney General . . . shall undertake periodic reviews of compliance of covered entities under this subchapter.”); 28 C.F.R. § 36.502(c) (“Where the Attorney General has reason to believe that there may be a violation of this part, he or she may initiate a compliance review.”).

[2] See, e.g., Nat’l Fed. of the Blind v. Target Corp., 452 F. Supp. 2d 946, 953–56 (N.D. Cal. 2006).


FTC Denies AgeCheq Parental Consent Application But Trumpets General Support for COPPA Common Consent Mechanisms

Covington & Burling Law Firm

The Federal Trade Commission (“FTC”) recently reiterated its support for the use of “common consent” mechanisms that permit multiple operators to use a single system for providing notices and obtaining verifiable consent under the Children’s Online Privacy Protection Act (“COPPA”). COPPA generally requires operators of websites or online services that are directed to children under 13 or that have actual knowledge that they are collecting personal information from children under 13 to provide notice and obtain verifiable parental consent before collecting, using, or disclosing personal information from children under 13.   The FTC’s regulations implementing COPPA (the “COPPA Rule”) do not explicitly address common consent mechanisms, but in the Statement of Basis and Purpose accompanying 2013 revisions to the COPPA Rule, the FTC stated that “nothing forecloses operators from using a common consent mechanism as long as it meets the Rule’s basic notice and consent requirements.”

The FTC’s latest endorsement of common consent mechanisms appeared in a letter explaining why the FTC was denying AgeCheq, Inc.’s application for approval of a common consent method.  The COPPA Rule establishes a voluntary process whereby companies may submit a formal application to have new methods of parental consent considered by the FTC.  The FTC denied AgeCheq’s application because it “incorporates methods already enumerated” in the COPPA Rule: (1) a financial transaction, and (2) a print-and-send form.  The implementation of these approved methods of consent in a common consent mechanism was not enough to merit a separate approval from the FTC.  According to the FTC, the COPPA Rule’s new consent approval process was intended to vet new methods of obtaining verifiable parental consent rather than specific implementations of approved methods.  While AgeCheq’s application was technically “denied,” the FTC emphasized that AgeCheq and other “[c]ompanies are free to develop common consent mechanisms without applying to the Commission for approval.”  In support of common consent mechanisms, the FTC quoted language from the 2013 Statement of Basis and Purpose and pointed out that at least one COPPA Safe Harbor program already relies on a common consent mechanism.


3 Things You Need To Know About Penguin 3.0

Consultsweb

Penguin is an algorithm from Google that judges the quality of links that you have pointing to your site. Inbound links, sometimes called “backlinks,” to your website are one of the factors that Google’s algorithms use to rank websites in its search results. Google uses the Penguin algorithm (or filter) to punish link profiles that it sees as low-quality (coming from untrustworthy sites) or unnatural.  This is a response to linking practices used in the early days of search marketing, and still employed by some vendors, to show clients’ quick success.


In the early days of the Web and SEO, the sheer volume of links (and linking domains) to a website helped its rankings in Google Search results.  Many early SEO companies prospered by buying and selling links, creating directories and setting up other sites for the sheer purpose of creating content and supplying links. This was an exploit used for years by almost every search marketing vendor to gain rankings for their clients.  Since April of 2012, Google has used Penguin to dissuade webmasters from this practice for fear of losing all rankings for their websites.

As Google crawls the Web and finds links to your site, it places them in a database of known links.  If you are bored, you can read through the original paper by Sergey Brin and Larry Page.  Penguin is a separate algorithm that is run periodically to parse through this database of links pointing to your site and check them against known spam sites and known manipulative techniques.

In an explanation of Penguin 3.0 for Forbes magazine, Jayson DeMers says Penguin “rewards sites that have natural, valuable, authoritative, relevant links.” It penalizes sites that have built manipulative links solely for the purpose of increasing rankings, or links that do not appear natural.

Penguin was introduced in April 2012 and updated twice that year with versions 1.1 and 1.2. Penguin 2.0 came out in May 2013, and an October update (2.1) had a fairly wide effect, causing Google ranking changes for about 1 percent of sites.

Penguin 3.0 was released in mid-October in what Google said could be a slow rollout. For some websites, Google said, it could be a few weeks until Penguin 3.0 had an effect, which is roughly when this article was published.

Here are the top 3 takeaways from the first days of Penguin 3.0:

1.  Penguin 3.0 may have little impact on quality websites.

Upon releasing Penguin 3.0, Google said: “[W]e started rolling out a Penguin refresh affecting fewer than 1 percent of queries in U.S. English search results. This refresh helps sites that have already cleaned up the Web spam signals discovered in the previous Penguin iteration, and demotes sites with newly discovered spam.”

This indicates that Penguin 3.0 will adjust rankings for sites that were adversely affected by earlier versions of the Penguin algorithm, but have since cleaned up offensive links.

But if your site is still plagued by low-quality links, Penguin 3.0 will have an effect on you, and the impact – “demotes sites with newly discovered spam” – should be in line with earlier iterations of Penguin.  The word to note here is “refresh”: Google’s Pierre Far called this a refresh, intimating that no new signals were added in this release.

2. Penguin 3.0 means you need to evaluate your links.

To avoid a penalty via Penguin 3.0 or to recover from it if Google has already penalized your site, you need to make sure you are not adding bad links that will hurt your site. You also need to rid your site of bad links pointing to it.

To avoid Penguin penalties, you want to review the type of links pointing to your site.  This can easily be done in Google Webmaster Tools by downloading a list of sample and latest links to your site.  Some of the items to look for are listed below (a simple screening sketch follows the list):

  • Links from foreign domains (e.g., walre.co.pl)
  • Links from sites whose domains contain many hyphens (e.g., best-personal-injury-lawyers-us.com)
  • Sites that are obviously off-topic (e.g., a site about fishing would not normally link to an attorney’s site)
  • Large quantities of links from a particular domain
  • Large percentages of commercial anchor text in the links pointing to your site (if you see anchor text that you would love to rank for in Google, it is commercial; commercial anchor text should not make up more than about 10% of your total anchor text)
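
As a rough illustration, the sketch below automates a first pass over a downloaded list of linking domains using a few of the heuristics above. The domain names, spam list, thresholds, and helper names are assumptions made for demonstration; a real link review still requires human judgment.

```python
# Illustrative sketch only: flag potentially problematic linking domains using
# a few of the heuristics described above. The domain names, spam list, link
# counts, and the ~10% anchor-text threshold are assumptions for demonstration.
from collections import Counter

KNOWN_SPAM = {"walre.co.pl"}            # hypothetical known-spam domains
COMMERCIAL_ANCHOR_LIMIT = 0.10          # ~10% commercial anchor text

def flag_domain(domain):
    """Return the reasons a linking domain looks suspicious, if any."""
    reasons = []
    if domain in KNOWN_SPAM:
        reasons.append("known spam domain")
    if domain.count("-") >= 3:
        reasons.append("many hyphens in domain")
    if not domain.endswith((".com", ".org", ".net", ".edu", ".gov")):
        reasons.append("unfamiliar or foreign TLD")
    return reasons

def anchor_text_ratio(anchors, commercial_terms):
    """Share of backlink anchor texts that contain a commercial keyword."""
    hits = sum(any(t in a.lower() for t in commercial_terms) for a in anchors)
    return hits / len(anchors) if anchors else 0.0

# Example usage with made-up data:
backlinks = ["walre.co.pl", "best-personal-injury-lawyers-us.com", "statebar.org"]
for domain, count in Counter(backlinks).items():
    problems = flag_domain(domain)
    if count > 50:
        problems.append("very large number of links from one domain")
    if problems:
        print(domain, "->", ", ".join(problems))

anchors = ["personal injury lawyer", "click here", "Jane Smith"]
if anchor_text_ratio(anchors, {"lawyer", "attorney"}) > COMMERCIAL_ANCHOR_LIMIT:
    print("Commercial anchor text exceeds ~10% of the link profile")
```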

Removing bad links can be tedious and tricky. First you have to identify them, and then you have to figure out how to get them taken down. You can simply contact the site that hosts them (if you can find a contact) and ask for the links to be removed. Google also provides a “disavow tool,” by which you can ask Google not to take certain links into account when assessing your site.

But Google’s disavow tool comes with two warnings: 1) “You should still make every effort to clean up unnatural links pointing to your site. Simply disavowing them isn’t enough.” And deeper on Google’s Webmaster Tools site, 2) “This is an advanced feature and should only be used with caution. If used incorrectly, this feature can potentially harm your site’s performance in Google’s search results.”
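
If, after a manual cleanup effort, you decide to disavow the links you could not get removed, a disavow file is just a plain text list. The short sketch below writes one from a set of flagged domains and URLs; the entries are hypothetical, and the “domain:” prefix and “#” comment syntax follow Google’s published format for the tool.

```python
# Minimal sketch: generate a disavow file from a list of flagged domains/URLs.
# The format (one "domain:" entry or URL per line, "#" for comments) follows
# Google's documented disavow-file format; the entries below are hypothetical.
flagged_domains = ["walre.co.pl", "best-personal-injury-lawyers-us.com"]
flagged_urls = ["http://example-directory.info/links/page17.html"]

with open("disavow.txt", "w") as f:
    f.write("# Links we asked to have removed with no response\n")
    for domain in flagged_domains:
        f.write("domain:" + domain + "\n")   # disavow every link from the domain
    for url in flagged_urls:
        f.write(url + "\n")                  # disavow a single page
```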

3. If you’ve invested in a search marketing campaign, you need to know how your provider is obtaining links to your site.

Building links to your site cannot just be something you expect your marketing provider to do. How it is done can ultimately affect your business, and could adversely impact your overall revenue if your website is penalized by the latest Penguin update or by future Penguin updates.

The biggest takeaway from all Penguin updates is that you need to know how your vendor, your provider, is getting links for you. If they are not working directly with you, then it is likely a scaled process, meaning that their tactics are low quality and potentially harmful.

Instead, your vendor should be working to obtain links from sites that represent highly regarded authorities in your field. In addition to direct outreach to request backlinks, which may have limited cost effectiveness, firms may build links through community outreach, such as sponsoring organizations or public events that publicize the firm, or by establishing a scholarship for local students and promoting it to area schools and school systems, which would link to scholarship information on your site. If a member of a law firm teaches at a local college or sits on a corporate or nonprofit organization’s board, those organizations’ sites may link back to that individual’s profile on your site.

Obtaining high quality backlinks is not always the easiest road, but it is the road well worth traveling, especially in the post-Penguin era.

How to Measure Your Email Marketing Performance

The Rainmaker Institute

Email newsletters have proven to be one of the most effective methods for attorneys to market themselves to prospects, clients and referral sources.  Every year, email marketing service provider MailerMailer provides a report on email marketing metrics across 34 different industries, including Legal.

They have just issued their 2014 report, based on data gathered from 62,000 newsletter campaigns totaling 1.18 billion emails sent between Jan. 1, 2013 and Dec. 31, 2013.  Here are the results — and what should be your new benchmarks — for your law firm newsletter:

Open rate (what percentage of your recipients opened your email):  13.5%

Click rate (what percentage of your recipients clicked on a link in your email):  1.6%

Click-to-open rate (of the recipients who opened your email, what percentage of them clicked on a link):  11.8%

Bounce rate (the percentage of emails that cannot be delivered):  2.4%

Every email service (Constant Contact, MailChimp, iContact, etc.) provides these statistics for each newsletter you send out.  If your newsletters are not delivering at rates that meet or exceed the benchmarks above, you have a problem.
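
For readers who prefer to check the math themselves, the short sketch below computes the same four rates from raw campaign counts and compares them to the benchmarks above. The campaign numbers, function, and variable names are illustrative assumptions, not data from the MailerMailer report.

```python
# Illustrative sketch: compute the four newsletter metrics discussed above from
# raw campaign counts and compare them to the 2014 benchmarks. The campaign
# numbers below are made up for demonstration.
BENCHMARKS = {"open": 13.5, "click": 1.6, "click_to_open": 11.8, "bounce": 2.4}  # percent

def newsletter_metrics(sent, bounced, opened, clicked):
    delivered = sent - bounced
    return {
        "open": 100 * opened / delivered,         # opens as % of delivered emails
        "click": 100 * clicked / delivered,       # clicks as % of delivered emails
        "click_to_open": 100 * clicked / opened,  # clicks as % of opened emails
        "bounce": 100 * bounced / sent,           # undeliverable as % of sent
    }

# Hypothetical campaign: 5,000 sent, 120 bounced, 640 opened, 70 clicked.
results = newsletter_metrics(sent=5000, bounced=120, opened=640, clicked=70)
for name, value in results.items():
    if name == "bounce":  # for bounce rate, lower is better
        verdict = "meets benchmark" if value <= BENCHMARKS[name] else "above benchmark"
    else:
        verdict = "meets benchmark" if value >= BENCHMARKS[name] else "below benchmark"
    print(f"{name}: {value:.1f}% ({verdict})")
```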

Here’s what you should consider to improve your click, open and bounce rates:

Are your subject lines engaging to entice people to open your email?  Short subject lines — 4 to 15 characters — generate the highest open and click rates.

Are you sending emails on the right day and at the right time?  The highest open rates occur on Mondays and the highest click rates occur on Sundays.  Open rates peak during the early part of the day, between 8 a.m. and noon.

Is your email list updated regularly and cleaned of old, undeliverable email addresses?  Bounce rates are inescapable but can be improved if you send out emails on a regular basis.

Have you segmented your email list so you can tailor your content to your different audiences?  Targeted emails deliver 18 times more revenue than general blast emails.

Are your emails personalized? Personalizing the message content can boost open rates significantly.

Do you use a responsive design template so your emails are displayed properly for every screen size?  More than half of emails are now opened on mobile devices.

If your newsletters are performing at or above these benchmarks, you may still have some work to do: if you don’t know the source of your success, you can’t repeat it.


.bit: Why Brands Need to Pay Attention [VIDEO]

Sterne Kessler Goldstein Fox

Monica Riva Talley, director at the intellectual property law firm Sterne, Kessler, Goldstein & Fox, P.L.L.C., discusses the unregulated domain .bit and why brands need to pay attention to this “Wild West of the Internet.” As Ms. Talley explains, ‘.bit’ is unlike any customary domain and presents several areas of concern for intellectual property owners including cybersquatting, the use of pirated content, and the absence of oversight or control by any regulatory entity.


Protecting Trade Secrets in the Cloud


The business community’s growing use of cloud-based computing services provides great benefits due to cost savings and mobile information access.  However, business leaders should understand the risks of storing valuable trade secrets in the cloud.  This article provides the business community with tips on how to safeguard valuable trade secrets stored in the cloud from being freely disclosed to the public, thus putting the business at risk of losing the protections that courts grant trade secrets.

As businesses’ profit margins have continued to shrink since the Great Recession, more companies have looked to rein in the growing expenses of their information technology departments.[1] The first line item to draw attention in the IT budget is frequently the rising cost of maintaining and upgrading system hardware.  Businesses often find that housing and operating multiple servers stretches IT budgets thin by increasing maintenance, labor, and operational costs.  The solution many businesses have turned to is to move their valuable data to virtual servers, or the “cloud.”[2]  A recent survey of IT executives indicates that companies will triple their IT spending on cloud-based services in 2014 compared with 2011.[3]  Cloud service providers have also seen demand increase as they expand their cloud capabilities.[4]

Although cloud-based servers provide businesses with substantial financial and operational benefits, businesses must recognize that there are perils to shifting data to the cloud.  One of the key concerns businesses should consider before moving data to the cloud is the risk that their valuable trade secrets will lose protection as a result of insufficient safeguards against disclosure.  This article addresses that concern and offers businesses keys to protecting valuable secrets in the cloud.

What Is a Protectable Trade Secret?

The initial step for a business to determine how to protect its trade secrets is to understand how the law characterizes a trade secret.  Information qualifies as a trade secret only if it derives independent economic value as a result of not being generally known or readily ascertainable, and is subject to reasonable efforts to maintain its secrecy.  Trade secrets are broadly defined as information, including technical or non-technical data, a formula, pattern, compilation, program, device, method, technique, drawing, process, financial data, strategies, pricing information, and lists of customers, prospective customers, and suppliers.

Businesses Need to Take Reasonable Efforts to Protect Trade Secrets in the Cloud

Trade secrets are only protectable when the owner takes reasonable efforts to prevent them from being freely disclosed to the public so that the information does not become generally known.

Information does not have to be cloaked in absolute secrecy to be a trade secret, as long as a business’s efforts to maintain secrecy or confidentiality are reasonable.  It is easy for one to imagine how a business may protect confidential documents that are stored locally.  Computer files may be password-protected with several layers of encryption software, with access limited to specified personnel.  Similarly, paper files may be stored in locked cabinets, in secured rooms, where only specified personnel are granted access.

However, those seemingly straightforward security protocols become murky when information is stored in the cloud.  Unlike storing data on local servers, storing data in the cloud requires the owner to disclose confidential information to a third-party vendor.  In most situations, disclosing data to a third party eliminates trade secret protections.  Therefore, businesses must take additional steps to ensure that their data remains secure.

Three Keys to Protecting Trade Secrets Stored in the Cloud

There are no fail-safe measures to protect data stored in the cloud.  The best way for a business to protect its trade secrets is to locally store and protect its most valuable data with the proper data security protocols.  A business, however, should not fear the cloud as long as it takes certain steps to ensure that it exercises reasonable efforts to protect its cloud-based data.

First, business leaders must conduct appropriate due diligence before selecting a cloud-provider.  The business should conduct necessary research to select a reputable, well-established company that has the physical and technological capabilities to store and protect data.

Conducting due diligence on a provider includes ensuring that the provider has taken necessary steps to establish appropriate physical and virtual security protocols to protect the confidentiality of your information.  Inquire how the provider establishes physical security measures, and monitoring capabilities to prevent unauthorized access to its data centers and infrastructure.  Also, learn how the provider limits its employees’ access to customer data and determine the internal controls that the provider has in place to prevent unauthorized viewing, copying, or emailing of customer information.

A business should also inquire about the provider’s virtual security protocols.  A business must generally understand how its cloud-provider’s encryption software and security management systems work to protect data.  If your business is not capable of independently evaluating whether the provider has proper security protocols, a good indicator is to ask the provider for its client list.  If the provider has clients that are typically security-conscious companies, such as financial institutions or healthcare facilities, that is a good indication that the provider has been vetted and it has proper security measures in place.  Finally, the provider should maintain sufficient data-protection insurance coverage to protect against potential data breaches or system failures.

Second, a business must have contractual safeguards in place with its cloud provider to adequately protect its intellectual property and trade secrets.  The contract should establish that the business owns the data, that the data will be segregated from other data groups, and that the business may enjoy unfettered access to the data.  The contract should specify that the business can demand that the data be deleted or returned upon request, and detail how the provider will purge the data to ensure that it is properly deleted upon termination of the relationship.  The contract should require regular data backup and recovery tests, while restricting the provider from accessing, using or copying data for its own purposes.  Finally, the contract should establish the provider’s obligations to notify the business of a data breach or system failure.

Third, a business should consider adding multiple layers of authentication and encryption to data containing trade secrets before transmitting it to the cloud provider.  However, a business should weigh whether the additional encryption could adversely affect its ability to access, utilize, and port data for normal business use.
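
As a minimal illustration of that last point, the sketch below encrypts a file on the business’s own systems before anything is sent to a cloud provider, using the open-source Python “cryptography” package’s Fernet recipe for authenticated symmetric encryption. The file names are hypothetical, and key management, which is the hard part in practice, is only gestured at in a comment.

```python
# Illustrative sketch: encrypt a trade-secret file on the business's own systems
# before uploading it to a cloud provider. Uses the third-party "cryptography"
# package (pip install cryptography); file names and paths are hypothetical.
from cryptography.fernet import Fernet

# In practice the key must be generated once, stored securely (e.g., in a
# key-management service or HSM), and never uploaded alongside the data.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("customer_pricing_formula.xlsx", "rb") as f:   # hypothetical trade-secret file
    plaintext = f.read()

ciphertext = cipher.encrypt(plaintext)                    # authenticated symmetric encryption

with open("customer_pricing_formula.xlsx.enc", "wb") as f:
    f.write(ciphertext)                                   # only this file goes to the cloud

# Later, after downloading the encrypted copy back from the provider:
recovered = cipher.decrypt(ciphertext)
assert recovered == plaintext
```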

Conclusion

There are several financial and operational benefits for a business to store data in the cloud.  However, businesses must understand that there are also risks to storing valuable trade secrets on virtual servers.  Businesses need to take reasonable efforts to protect the confidentiality and secrecy of their most valuable data and information.


[1] Dave Rosenberg.  Reducing IT Infrastructure Costs via Outsourcing.  May 7, 2009.  news.cnet.com/8301-13846_3-10235742-62.html

[2] Thor Olavsrud.  How Cloud Computing Helps Cut Costs, Boost Profits.  March 12, 2013. www.cio.com/article/730036/How_Cloud_Computing_Helps_Cut_Costs_Boost_Profits

[3] Andrew Horne. Transformational Change in IT Will Drive 2014 Spending.  November 5, 2013.  http://blogs.wsj.com/cio/2013/11/05/transformational-change-in-it-will-drive-2014-spending/

[4] IBM Commits $1.2bn to Cloud Data Centre Expansion.  January 17, 2014. www.bbc.co.uk/news/business-25773266

ICANN’s gTLD Program – A Look Back and Forward

Sterne Kessler Goldstein Fox

ICANN’s new Generic Top-Level Domain (gTLD) program has been in full swing for over a year now, so it seems an apt time to examine some statistics on how brands are engaging with new gTLDs and utilizing the Trademark Clearinghouse (TMCH), and which new gTLDs may give .com a run for its money.

gTLD Registration

While ICANN expects more than 1,300 gTLDs to go live in the coming years, for the moment only slightly more than 400 are available. Despite the relatively slow roll-out of new top-level domains (the characters following the ‘.’ in a domain name), the total number of registrations within these new domains has exceeded the one million mark.

To date, the top five strings sitting atop the gTLD registrations list are: .xyz, .club, .guru, .berlin, and .photography. The most popular new string .xyz, which is marketing itself as an alternative to the crowded .com registry, has amassed nearly 525,000 registrations alone.

Interest and Adoption by Top Brands

World Trademark Review (WTR) recently explored the .xyz domain registrations of the 50 most valuable brands and found that 80% had either registered or blocked their brand in this space. WTR’s review also found evidence of prevalent cybersquatting; for example, a single individual currently owns the domain names “americanexpress,” “honda,” and “homedepot” in the .xyz space.

In general, the level of brand adoption and interaction with the gTLD program remains inconsistent, with some brands significantly more proactive than others in their fields. Even when it comes to the Trademark Clearinghouse (TMCH), companies traditionally known for brand protection, including Red Bull, Nintendo, and BlackBerry, have evidently decided not to register their marks with this rights protection database.

Trademark Clearinghouse

The TMCH is ICANN’s centralized database of registered trademarks related to the new gTLD program. According to the most recent figures released by the TMCH, nearly 33,000 marks from 103 countries and covering 119 jurisdictions have been submitted. These marks represent protection for over 11,000 brands and businesses worldwide. Of the marks submitted, 87% have been registered by a trademark agent, approximately 50% for multiple years, and nearly 98% have been verified. The TMCH will still be accepting mark submissions and renewals indefinitely, and approximately 7,000 marks have been submitted since the beginning of the year. On November 5 of this year, the first group of TMCH registrations will be up for renewal.

The TMCH is also tasked with delivering Claims Notices to those attempting to register a domain name matching a trademarked term. In March the Clearinghouse revealed that in excess of 500,000 Claims Notices had been issued, and 95% of the infringing domain registrations were no longer being pursued. The TMCH hailed the number of delivered Claims Notices as an indication of a “high level of interest in trademarked terms from third parties,” and proof that “protection mechanisms are working.”

But, while these findings appear to suggest the success of defensive mechanisms, there are at least two alternative interpretations of the data that likely influence these numbers. First, many of the infringing domain registrations were likely the product of data-mining and unlikely to have been pursued regardless. The second is that the sheer number of Claims Notices being issued may be keeping individuals with valid applications on the sidelines. Regardless of the reasoning behind the Claims Notices, they are at least evidence of the popularity and interest surrounding the new gTLD program.

gTLD Round Two?

As the first expanded gTLD round progresses toward conclusion, ICANN has begun planning the second round. The organization has stated publicly that the next round is expected in 2016 at the earliest, but experts believe 2017 is a more realistic time frame.

In preparation for the second round of gTLDs, ICANN has published a Draft Work Plan. The 27-page document details several sets of reviews and activities scheduled to guide consideration of the second round of applications. The plan addresses reviews of program implementation, root stability, rights protection, the GNSO, and competition, consumer trust, and choice.

As the gTLD space continues to expand indefinitely, brands will have to continue to monitor and reassess how to navigate this dynamic landscape.