LinkedIn Petitions Circuit Court for En Banc Review of hiQ Scraping Decision

On October 11, 2019, LinkedIn Corp. (“LinkedIn”) filed a petition for rehearing en banc of the Ninth Circuit’s blockbuster decision in hiQ Labs, Inc. v. LinkedIn Corp., No. 17-16783 (9th Cir. Sept. 9, 2019). The crucial question before the original panel concerned the scope of Computer Fraud and Abuse Act (CFAA) liability for unwanted web scraping of publicly available social media profile data: once hiQ Labs, Inc. (“hiQ”), a data analytics firm, received LinkedIn’s cease-and-desist letter demanding that it stop scraping public profiles, was any further scraping of such data “without authorization” within the meaning of the CFAA? The appeals court affirmed the lower court’s order granting a preliminary injunction barring LinkedIn from blocking hiQ from accessing and scraping publicly available LinkedIn member profiles to create competing business analytics products. Most notably, the Ninth Circuit held that hiQ had shown a likelihood of success on the merits of its claim that when a computer network generally permits public access to its data, a user’s accessing that publicly available data will not constitute access “without authorization” under the CFAA.

In its petition for en banc rehearing, LinkedIn advanced several arguments, including:

  • The hiQ decision conflicts with the Ninth Circuit’s Power Ventures precedent, in which the appeals court held that a commercial entity that accesses a website after permission has been explicitly revoked can, under certain circumstances, be civilly liable under the CFAA. Power Ventures involved password-protected Facebook user data (which users had initially given a data aggregator permission to access). LinkedIn argued that the hiQ court’s logic in distinguishing Power Ventures was flawed and that the manner in which a user classifies his or her profile data should have no bearing on a website owner’s right to protect its physical servers from trespass.

“Power Ventures thus holds that computer owners can deny authorization to access their physical servers within the meaning of the CFAA, even when users have authorized access to data stored on the owner’s servers. […] Nothing about a data owner’s decision to place her data on a website changes LinkedIn’s independent right to regulate who can access its website servers.”

  • The language of the CFAA should not be read to allow for “authorization” to be assumed (and unable to be revoked) for publicly available website data, either under Ninth Circuit precedent or under the CFAA-related case law of other circuits.

“Nothing in the CFAA’s text or the definition of ‘authorization’ that the panel employed—“[o]fficial permission to do something; sanction or warrant”—suggests that enabling websites to be publicly viewable is not ‘authorization’ that can be revoked.”

  • The privacy interests enunciated by LinkedIn on behalf of its users are “of exceptional importance,” and the court discounted the fact that hiQ is “unaccountable” and has no contractual relationship with LinkedIn users, such that hiQ could conceivably share the scraped data or aggregate it with other data.

“Instead of recognizing that LinkedIn members share their information on LinkedIn with the expectation that it will be viewed by a particular audience (human beings) in a particular way (by visiting their pages)—and that it will be subject to LinkedIn’s sophisticated technical measures designed to block automated requests—the panel assumed that LinkedIn members expect that their data will be ‘accessed by others, including for commercial purposes,’ even purposes antithetical to their privacy setting selections. That conclusion is fundamentally wrong.”

Both website operators and open internet advocates will be watching closely to see if the full Ninth Circuit decides to rehear the appeal, given the importance of the CFAA issue and the prevalence of data scraping of publicly available website content. We will keep a close watch on developments.


© 2019 Proskauer Rose LLP.

Resist the Urge to Access: the Impact of the Stored Communications Act on Employer Self-Help Tactics

As an employer or manager, have you ever collected a resigning employee’s employer-owned laptop or cellphone and discovered that the employee left a personal email account automatically logged in? Did you have the urge to look at what the employee was doing and who the employee was talking to right before resigning? Perhaps to see if he or she was talking to your competitors or customers? If so, you should resist that urge.

The federal Stored Communications Act, 18 U.S.C. § 2701 et seq., is a criminal statute that makes it an offense to “intentionally access[] without authorization a facility through which an electronic communication service is provided[] and thereby obtain[] . . . access to a[n] . . . electronic communication while it is in electronic storage . . . .” It also creates a civil cause of action for victims of such offenses, remedied by (i) actual damages of at least $1,000; (ii) attorneys’ fees and court costs; and, potentially, (iii) punitive damages if the access was willful or intentional.

So how does this criminal statute apply in a situation in which an employee uses a personal email account on an employer-owned electronic device—especially if an employment policy confirms there is no expectation of privacy on the employer’s computer systems and networks? The answer is in the technology itself.

Many courts find that the “facility” referenced in the statute is the server on which the email account resides—not the company’s computer or other electronic device. In one 2013 federal case, a former employee left her personal Gmail account automatically logged in when she returned her company-owned smartphone. Her former supervisor allegedly used that smartphone to access over 48,000 emails on the former employee’s personal Gmail account. The former employee later sued her former supervisor and her former employer under the Stored Communications Act. The defendants moved to dismiss the claim, arguing, among other things, that a smartphone was not a “facility” under the statute.

While agreeing with that argument in principle, the court concluded that it was, in fact, Gmail’s server that was the “facility” for purposes of Stored Communications Act claims. The court also rejected the defendants’ arguments (i) that because it was a company-owned smartphone, the employee had in fact authorized the review, and (ii) that the former employee was responsible for any alleged loss of privacy, because she left the door open to the employer reviewing the Gmail account.

Similarly, in a 2017 federal case, a former employee sued her ex-employer for allegedly using her returned cell phone to access her Gmail account on at least 40 occasions. To assist in the prosecution of a restrictive covenant claim against the former employee, the former employer allegedly arranged to forward several of those emails to the employer’s counsel, including certain allegedly privileged emails between the former employee and her lawyer. The court denied the former employer’s motion to dismiss the claim based on those allegations.

Interestingly, some courts, including the courts in both of the above-referenced cases, draw a line on liability under the Stored Communications Act based on whether the emails that were accessed had already been opened at the time of access. This line of reasoning is premised on a finding that opened-but-undeleted emails are not in “storage for backup purposes” under the Stored Communications Act. But this distinction is not universal.

In another 2013 federal case, for example, an individual sued his business partner under the Stored Communications Act after the defendant logged on to the other’s Yahoo account using his password. A jury trial resulted in a verdict for the plaintiff on that claim, and the defendant filed a motion for judgment as a matter of law. The defendant argued that she only read emails that had already been opened and that they were therefore not in “electronic storage” for “purposes of backup protection.” The court disagreed, stating that “regardless of the number of times plaintiff or defendant viewed plaintiff’s email (including by downloading it onto a web browser), the Yahoo server continued to store copies of those same emails that previously had been transmitted to plaintiff’s web browser and again to defendant’s web browser.” So again, the court read the Stored Communications Act broadly, stating that “the clear intent of the SCA was to protect a form of communication in which the citizenry clearly has a strong reasonable expectation of privacy.”

Based on the broad reading of the Stored Communications Act in which many courts across the country engage, employers and managers are well advised to exercise caution before reviewing an employee’s personal communications that may be accessible on a company electronic device. Even policies informing employees not to expect privacy on company computer systems and networks may not save the employer or manager from liability under the statute. So seek legal counsel if this opportunity presents itself upon an employee’s separation from the company. And resist the urge to access before doing so.


© 2019 Foley & Lardner LLP
For more on the Stored Communications Act, see the National Law Review Communications, Media & Internet law page.

Vimeo Hit with Class Action for Alleged Violations of Biometric Law

Vimeo, Inc. was sued last week in a class action case alleging that it violated the Illinois Biometric Information Privacy Act by “collecting, storing and using Plaintiff’s and other similarly situated individuals’ biometric identifiers and biometric information…without informed written consent.”

According to the Complaint, Vimeo “has created, collected and stored, in conjunction with its cloud-based Magisto service, thousands of ‘face templates’ (or ‘face prints’)—highly detailed geometric maps of the face—from thousands of Magisto users.” The suit alleges that Vimeo creates these templates using facial recognition technology and that “[e]ach face template that Vimeo extracts is unique to a particular individual, in the same way that a fingerprint or voiceprint uniquely identifies one and only one person.” The plaintiffs are trying to liken an image captured by facial recognition technology to a fingerprint by calling it a “faceprint,” a creative move in the wake of mixed reactions to the use of facial recognition technology in the Facebook and Shutterfly cases.

The suit alleges “users of Magisto upload millions of videos and/or photos per day, making videos and photographs a vital part of the Magisto experience….Users can download and connect any mobile device to Magisto to upload and access videos and photos to produce and edit their own videos….Unbeknownst to the average consumer, and in direct violation of…BIPA, Plaintiff…believes that Magisto’s facial recognition technology scans each and every video and photo uploaded to Magisto for faces, extracts geometric data relating to the unique points and contours (i.e., biometric identifiers) of each face, and then uses that data to create and store a template of each face—all without ever informing anyone of this practice.”

The suit further alleges that when a user uploads a photo, the Magisto service creates a template for each face depicted in the photo, and compares that face with others in its face database to see if there is a match. According to the Complaint, the templates are also able to recognize gender, age and location and are able to collect biometric information from non-users. All of this is done without consent of the individuals, and in alleged violation of BIPA.
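As technical background, a “face template” of the kind the Complaint describes is generally a numeric vector (an “embedding”) computed from a face image, and two faces are declared a match when their vectors are sufficiently close. The short Python sketch below illustrates only that comparison step, using invented placeholder vectors; it is not based on any knowledge of how Magisto’s actual system works.

    # Illustration of how face-template matching generally works: each face is
    # reduced to a numeric embedding vector, and two faces "match" when their
    # vectors are close enough. The vectors here are invented placeholders; a
    # real system derives them from a trained facial recognition model.
    import math

    def cosine_similarity(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    def find_match(template, database, threshold=0.95):
        # Return the identity of the closest stored template above the threshold.
        best_id, best_score = None, threshold
        for identity, stored in database.items():
            score = cosine_similarity(template, stored)
            if score > best_score:
                best_id, best_score = identity, score
        return best_id

    database = {  # hypothetical stored templates keyed by user
        "user_a": [0.11, 0.52, 0.80],
        "user_b": [0.70, 0.10, 0.33],
    }
    probe = [0.12, 0.50, 0.81]  # template from a newly uploaded photo
    print(find_match(probe, database))  # -> user_a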

Although we previously have seen some facial recognition cases alleging violation of BIPA, and there are numerous cases alleging violation of BIPA for collection of fingerprints in the employment setting, this case is a little different from those, and it will be interesting to watch.



Copyright © 2019 Robinson & Cole LLP. All rights reserved.
For more on biometrics & privacy see the National Law Review Communications, Media & Internet law page.

WIPO Launches UDRP for .CN and .中国 ccTLD

The World Intellectual Property Organization (WIPO) has launched a Uniform Domain-Name Dispute-Resolution Policy (UDRP) service for the .CN and .中国 (China) country code Top-Level Domain (ccTLD), making WIPO the first non-Chinese entity to handle such disputes. Previously, the China International Economic and Trade Arbitration Commission Online Dispute Solution Center (CIETAC ODRC) and the Hong Kong International Arbitration Center (HKIAC) were the only bodies authorized by the China Internet Network Information Center (CNNIC) to handle domain name disputes for these domains. The .CN and .中国 ccTLD is among the largest in the world, with over 22 million registered domain names.

The WIPO UDRP for the .CN and .中国 ccTLD is only applicable to .CN and .中国 domain names that have been registered for less than three years.  In contrast to the conventional UDRP, the Chinese UDRP applies to domain names that are identical or confusingly similar not only to a mark but to any “name” in which the complainant has civil rights or interests.

The complainant must prove that either registration or use of the disputed domain name is in bad faith, but not both as in the traditional UDRP.  Examples of bad faith provided by WIPO include:

  • The purpose for registering or acquiring the domain name is to sell, rent or otherwise transfer the domain name registration to the complainant who is the owner of the name or mark or to a competitor of that complainant, and to obtain unjustified benefits;
  • The disputed domain name holder, on many occasions, registers domain names in order to prevent owners of the names or marks from reflecting the names or the marks in corresponding domain names;
  • The disputed domain name holder has registered or acquired the domain name for the purpose of damaging the Complainant’s reputation, disrupting the Complainant’s normal business or creating confusion with the Complainant’s name or mark so as to mislead the public;
  • Other circumstances that may prove bad faith.

The language of proceedings will be in Chinese unless otherwise agreed by the parties or determined by the Panel.  More information is available at WIPO’s site.


© 2019 Schwegman, Lundberg & Woessner, P.A. All Rights Reserved.

For more on internet IP concerns, see the National Law Review Intellectual Property law page.

Recent COPPA Settlements Offer Compliance Reminders

The recently announced FTC settlement with YouTube and its parent company, as well as the 2018 settlement between the New York Office of the Attorney General and Oath, have set a new bar when it comes to COPPA compliance.

The settlements offer numerous takeaways, including reminders to those that use persistent identifiers to track children online and deliver them targeted ads.  These takeaways include, but are not limited to, the following.

FTC Chairman Joseph Simons stated that “YouTube touted its popularity with children to prospective corporate clients … yet when it came to complying with COPPA, the company refused to acknowledge that portions of its platform were clearly directed to kids.”

First, under COPPA, a child-directed website or online service – or a site that has actual knowledge it’s collecting or maintaining personal information from a child – must give clear notice on its site of “what information it collects from children, how it uses such information and its disclosure practices for such information.”

Second, the website or service must give direct notice to parents of their practices “with regard to the collection, use, or disclosure of personal information from children.”

Third, prior to collecting personal information from children under 13, COPPA-covered companies must get verifiable parental consent.

COPPA’s definition of “personal information” specifically includes persistent identifiers used for behavioral advertising.  It is critical to note that third-party platforms are subject to COPPA when they have actual knowledge they are collecting personal information from users of a child-directed website.

In March 2019, the FTC handed down what was, at the time, the largest civil penalty ever for violations of COPPA, following allegations that Musical.ly knew many of its users were children and still failed to seek parental consent.  There, the FTC charged that Musical.ly failed to provide notice on its website of the information it collects online from children, how it uses such information and its disclosure practices; failed to provide direct notice to parents; failed to obtain consent from parents before collecting personal information from children; failed to honor parents’ requests to delete personal information collected from children; and retained personal information for longer than reasonably necessary.

Content creators must know COPPA’s requirements.

If a platform hosting third-party content knows that content is directed to children, it is unlawful to collect personal information from viewers without getting verifiable parental consent.

While it may be fine for most commercial websites geared to a general audience to include a corner for children, if that portion of the website collects information from users, COPPA obligations are triggered.

Comprehensive COPPA policies and procedures to protect children’s privacy are a good idea, as are competent oversight, COPPA training for relevant personnel, the identification of risks that could result in violations of COPPA, the design and implementation of reasonable controls to address the identified risks, the regular monitoring of the effectiveness of those controls, and the development and use of reasonable steps to select and retain service providers that can comply with COPPA.

The FTC and the New York Attorney General are serious about COPPA enforcement.  Companies should exercise caution with respect to such data collection practices.



© 2019 Hinch Newman LLP

Practical Tips and Tools for Maintaining ADA-Compliant Websites

Title III of the Americans with Disabilities Act (ADA), enacted in 1990, prohibits discrimination against disabled individuals in “places of public accommodation”—defined broadly to include private entities that offer commercial services to the public. 42 U.S.C. § 12181(7). Under the ADA, disabled individuals are entitled to the full and equal enjoyment of the goods, services, facilities, privileges, and accommodations offered by a place of public accommodation. Id. § 12182(a). To comply with the law, places of public accommodation must take steps to “ensure that no individual with a disability is excluded, denied services, segregated or otherwise treated differently than other individuals.” Id. § 12182(b)(2)(A)(iii).

In the years immediately following the enactment of the ADA, the majority of lawsuits alleging violations of Title III arose as a result of barriers that prevented disabled individuals from accessing brick-and-mortar businesses (e.g., a lack of wheelchair ramps or accessible parking spaces). However, the use of the Internet to transact business has become virtually ubiquitous since the ADA’s passage almost 30 years ago. As a result, lawsuits under Title III have proliferated in recent years against private businesses whose web sites are inaccessible to individuals with disabilities. Indeed, the plaintiffs’ bar has formed something of a cottage industry in recent years, with numerous firms devoted to issuing pre-litigation demands to a large number of small to mid-sized businesses, alleging that the businesses’ web sites are not ADA-accessible. The primary purpose of this often-effective strategy is to swiftly obtain a large volume of monetary settlements without incurring the costs of initiating litigation.

Yet despite this upsurge in web site accessibility lawsuits—actual and threatened—courts have not yet reached a consensus on whether the ADA even applies to web sites. As discussed above, Title III of the ADA applies to “places of public accommodation.” A public accommodation is a private entity that offers commercial services to the public. 42 U.S.C. § 12181(7). The First, Second, and Seventh Circuit Courts of Appeals have held that web sites can be a “place of public accommodation” without any connection to a brick-and-mortar store.[1] However, the Third, Sixth, Ninth, and Eleventh Circuit Courts of Appeals have suggested that Title III applies only if there is a “nexus” between the goods or services offered to the public and a brick-and-mortar location.[2] In other words, in the latter group of Circuits, a business that operates solely through the Internet and has no customer-facing physical location may be under no obligation to make its web site accessible to users with disabilities.

To make matters even less certain, neither Congress nor the Supreme Court has established a uniform set of standards for maintaining an accessible web site. The Department of Justice (DOJ) has, for years, signaled its intent to publish specific guidance regarding uniform standards for web site accessibility under the ADA. However, to date, the DOJ has not published such guidance and, given the agency’s present priorities, it is unlikely to do so in the near future. Accordingly, courts around the country have been called on to address whether specific web sites provide sufficient access to disabled users. In determining the standards for ADA compliance, several courts have cited the Web Content Accessibility Guidelines (WCAG) 2.1, Level AA (or its predecessor, WCAG 2.0), a series of web accessibility guidelines published by the World Wide Web Consortium, a nonprofit organization formed to develop uniform international standards for the Internet. While not law, the WCAG contain recommended guidelines for how businesses can develop their web sites to be accessible to users with disabilities. In the absence of legal requirements, however, businesses lack clarity on what, exactly, is required to comply with the ADA.

Nevertheless, given the proliferation of lawsuits in this area, businesses that sell goods or services through their web sites or have locations across multiple jurisdictions should take concrete steps to audit their web sites and address any existing accessibility barriers.

Several online tools exist that allow users to conduct free, instantaneous audits of any URL, such as those offered at https://tenon.io/ and https://wave.webaim.org/. However, companies should be aware that the reports generated by such tools can be under-inclusive in that they may not address every accessibility benchmark in WCAG 2.1. The reports also can be over-inclusive and identify potential accessibility issues that would not prevent disabled users from fully accessing and using a site. Accordingly, companies seeking to determine their potential exposure under Title III should engage experienced third-party auditors to conduct individualized assessments of their web sites. Effective audits typically involve an individual tester attempting to use assistive technology, such as screen readers, to view and interact with the target site. Businesses also should regularly re-audit their web sites, as web accessibility allegations often arise in connection with web sites which may have been built originally to be ADA-compliant, but have fallen out of compliance due to content additions or updates.
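To illustrate the kind of check these automated tools perform, the following is a minimal sketch, in Python, that flags one common WCAG 2.1 failure: images served without alternative text (success criterion 1.1.1, “Non-text Content”). It uses only the Python standard library; the URL is a placeholder, and a single check like this, covering one criterion out of dozens, is precisely why automated reports can be under-inclusive.

    # Minimal sketch of one automated accessibility check: flag <img> tags
    # that lack an alt attribute (WCAG 2.1 success criterion 1.1.1).
    # Standard library only; a real audit covers far more criteria.
    import urllib.request
    from html.parser import HTMLParser

    class MissingAltChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.flagged = []  # src values of images missing alt text

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                attr_map = dict(attrs)
                # An empty alt="" is valid for decorative images; only a
                # missing attribute is flagged here.
                if "alt" not in attr_map:
                    self.flagged.append(attr_map.get("src", "<no src>"))

    def audit(url):
        html = urllib.request.urlopen(url).read().decode("utf-8", errors="replace")
        checker = MissingAltChecker()
        checker.feed(html)
        return checker.flagged

    if __name__ == "__main__":
        for src in audit("https://example.com"):  # placeholder URL
            print("image missing alt text:", src)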

Companies building new web sites, updating existing sites, or creating remediation plans should consider working with web developers familiar with and able to comply with the WCAG 2.1 criteria. While no federal court has held that compliance with WCAG 2.1 is mandatory under Title III, several have recognized the guidelines as establishing a sufficient level of accessibility for disabled users.[3] Businesses engaging new web developers to design or revamp their sites should ask specific questions regarding the developers’ understanding of and ability to comply with WCAG 2.1 in the site’s development, and should memorialize in writing any agreements with the web developer regarding specific accessibility benchmarks.


[1] See Carparts Distrib. Ctr., Inc. v. Auto. Wholesaler’s Ass’n of New England, Inc., 37 F.3d 12, 19 (1st Cir. 1994) (“By including ‘travel service’ among the list of services considered ‘public accommodations,’ Congress clearly contemplated that ‘service establishments’ include providers of services which do not require a person to physically enter an actual physical structure.”); Andrews v. Blick Art Materials, LLC, 268 F. Supp. 3d 381, 393 (E.D.N.Y. 2017); Doe v. Mut. of Omaha Ins. Co., 179 F.3d 557, 559 (7th Cir. 1999).

[2] See Peoples v. Discover Fin. Servs., Inc., 387 F. App’x 179, 183 (3d Cir. 2010) (“Our court is among those that have taken the position that the term is limited to physical accommodations”) (citation omitted); Parker v. Metro. Life Ins. Co., 121 F.3d 1006, 1010-11 (6th Cir. 1997); Weyer v. Twentieth Century Fox Film Corp., 198 F.3d 1104, 1114 (9th Cir. 2000); Haynes v. Dunkin’ Donuts LLC, 741 F. App’x 752, 754 (11th Cir. 2018) (“It appears that the website is a service that facilitates the use of Dunkin’ Donuts’ shops, which are places of public accommodation.”).

[3] See, e.g., Robles v. Domino’s Pizza, LLC, 913 F.3d 898, 907 (9th Cir. 2019) (holding that failure to comply with WCAG is not a per se violation of the ADA, but that trial courts “can order compliance with WCAG 2.0 as an equitable remedy if, after discovery, the website and app fail to satisfy the ADA.”).


© 2019 Vedder Price
This article was written by Margaret G. Inomata and Harrison Thorne of Vedder Price.
For more web-related legal issues, see the National Law Review Communications, Media & Internet law page.

Will Technology Return Shame to Our Society?

The sex police are out there on the streets
Make sure the pass laws are not broken

Undercover (of the Night), The Rolling Stones

So, now we know that browsing porn in “incognito” mode doesn’t prevent those sites from leaking your dirty data, courtesy of the friendly folks at Google and Facebook.  Ninety-three per cent of porn sites leak user data to a third party. Of these, Google tracks about 74 per cent of the analyzed porn sites, while Oracle tracks nearly 24 per cent and Facebook tracks nearly 10 per cent.  Yet, despite such stats, 30 per cent of all internet traffic still relates to porn sites.

The hacker who perpetrated the enormous Capital One data breach outed herself by oversharing on GitHub.  Had she been able to keep her trap shut, we’d probably still not know that she was in our wallets.  Did she want to get caught, or was she simply unashamed of having stolen a Queen’s ransom worth of financial data?

Many have lamented that shame (along with irony, truth and proper grammar) is dead.  I disagree.  I think that shame has been on the outward leg of a boomerang trajectory fueled by technology and is accelerating on the return trip to whack us noobs in the back of our unsuspecting heads.

Technology has allowed us to do all sorts of stuff privately that we used to have to muster the gumption to do in public.  Buying Penthouse the old-fashioned way meant you had to brave the drugstore cashier, who could turn out to be a cheerleader at your high school or your Mom’s PTA friend.  Buying the Biggie Bag at Wendy’s meant enduring the disapproving stares of vegans buying salads and diet iced tea.  Let’s not even talk about ED medication or baldness cures.

All your petty vices and vanity purchases can now be indulged in the sanctity of your bedroom.  Or so you thought.  There is no free lunch, naked or otherwise, we are coming to find.  How will society respond?

Country music advises us to dance like no one is watching and to love like we’ll never get hurt. When we are alone, we can act closer to our baser instincts.  This is why privacy is protective of creativity and subversive behaviors, and why in societies without privacy, people’s behavior regresses toward the most socially acceptable responses.  As my partner Ted Claypoole wrote in Privacy in the Age of Big Data,

“We all behave differently when we know we are being watched and listened to, and the resulting change in behavior is simply a loss of freedom – the freedom to behave in a private and comfortable fashion; the freedom to allow the less socially-careful branches of our personalities to flower. Loss of privacy reduces the spectrum of choices we can make about the most important aspects of our lives.

“By providing a broader range of choices, and by freeing our choices from immediate review and censure from society, privacy enables us to be creative and to make decisions about ourselves that are outside the mainstream. Privacy grants us the room to be as creative and thought-provoking as we want to be. British scholar and law dean Timothy Macklem succinctly argues that the ‘isolating shield of privacy enables people to develop and exchange ideas, or to foster and share activities, that the presence or even awareness of other people might stifle. For better and for worse, then, privacy is a sponsor and guardian to the creative and the subversive.’”

For the past two decades we have let down our guard, exercising our most subversive and embarrassing expressions of id in what we thought was a private space. Now we see that such privacy was likely an illusion, and we feel as if we’ve been somehow gaslighted into showing our noteworthy bad behavior in the disapproving public square.

Exposure of the Ashley Madison affair-seeking population should have taught us this lesson, but it seems that each generation needs to learn in its own way.

The nerds will, inevitably, figure out how to continue to work and play largely unobserved.  But what of the rest of us?  Will the pincer attack of the advancing surveillance state and the denizens of the Dark Web bring shame back as a countervailing force to govern our behavior?  Will the next decade be marked as the New Puritanism?

Dwight Lyman Moody, a prominent 19th-century evangelist, author, and publisher, famously said, “Character is what you are in the dark.”  Through the night vision goggles of technology, more and more of your neighbors can see who you really are and there are very few of us who can bear that kind of scrutiny.  Maybe Mick Jagger had it right all the way back in 1983, when he advised “Curl up baby/Keep it all out of sight.”  Undercover of the night indeed.



Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

Internet of Things: The Global Regulatory Ecosystem and the Most Promising Smart Environments Part II

Regulatory Ecosystem

Hyperconnectivity is a real phenomenon, and it is changing the concerns of society because of the kinds of interactions that IoT devices can bring about: i) people to people; ii) people to things (objects, machines); iii) things/machines to things/machines.

These interactions give rise to different issues for people. According to a European survey, 72% of EU Internet users worry that too much of their personal data is being shared online and that they have little control over what happens to this information[1]. This raises inevitable ethical issues about people’s relationship with the technological environment.

The discussion on ethics that follows aims to provide a quick tour of the general ethical principles and theories that may apply to IoT[2]. Law and ethics overlap, but ethics goes beyond law. A comparison of law and ethics that points out their differences is made in the work of Spyros G Tzafestas, who wrote Ethics and Law in the Internet of Things World. In that article, he considers the risks and harms in a digital world to be very high and complex, explaining the relevant technical terms and their impact on our private lives. Thus, it is of primary importance to review IoT and understand the limitations of protective legal, regulatory and ethical frameworks, in order to provide sound recommendations for maximizing good and minimizing harm[3].

Major data security concerns have also been raised with respect to ‘cloud’-supported IoT. Cloud computing (‘the cloud’) essentially consists of the concentration of resources, e.g. hardware and software, into a few physical locations by a cloud service provider (e.g. Amazon Web Service)[4]. We are living in a data-sharing storm and the economic impact of IoT’s cyber risks is increasing with the integration of digital infrastructure in the digital economy[5]. We are surrounded by devices which contain our data, for instance:

  • Wearable health technologies: wearable devices that continuously monitor the health status of a patient or gather real-world information about the patient such as heart rate, blood pressure, fever;
  • Wearable textile technologies: clothes that can change their color on demand or based on the biological condition of the wearer or according to the wearer’s emotions;
  • Wearable consumer electronics: wristbands, headbands, rings, smart glasses, smart watches, etc[6].

As a result of the serious impact IoT may have and because it involves a huge number of connected devices, it creates a new social, political, economic, and ethical landscape. Therefore, for a sustainable development of IoT, political and economic decision-making bodies have to develop proper regulations in order to be able to control the fair use of IoT in society.

In this sense, the regions most developed in establishing IoT regulations and an ethical framework are the European Union and the United States, both of which have enacted:

  • Legislation/regulations;
  • Ethics principles, rules and codes;
  • Standards/guidelines;
  • Contractual arrangements;
  • Regulations for the devices connected;
  • Regulations for the networks and their security; and
  • Regulations for the data associated with the devices.

In light of this, the next section will deal with data protection regulations, consumer protection acts, IoT and cyber risk laws, and a roadmap for standardization of regulations, risk maturity, strategy design and impact assessment, all related to the 2020 scenario: 200 billion sensor devices and a market size that, by 2025, will be between $2.7 trillion and $3 trillion a year.

Europe

The Alliance for Internet of Things Innovation (AIOTI) was initiated by the European Commission in order to open a stream of dialogue between European stakeholders within the Internet of Things (IoT) market. The overall goal of this initiative was the creation of a dynamic European IoT ecosystem to unleash the potential of IoT.

In October 2015, the Alliance published 12 reports covering IoT policy and standards issues. It provided detailed recommendations for future collaborations in the Internet of Things Focus Area of the 2016-2017 Horizon 2020 programme[7].

The IoT regulatory framework in Europe is a growth area:

  • EU Directive-2013/40: this Directive deals with “Cybercrime” (i.e., attacks against information systems). It provides definitions of criminal offences and sets proper sanctions for attacks against information systems[8].
  • EU NIS Directive 2016/1148: this Network and Information Security (NIS) Directive concerns “Cybersecurity” issues. Its aim is to provide legal measures to assure a common overall level of cybersecurity (network/information security) in the EU, and an enhanced coordination degree among EU Members[9].
  • EU Directive 2014/53: this Directive “On the harmonization of the laws of the member states relating to the marketing of radio equipment”[10] is concerned with the standardization issue which is important for the joint and harmonized development of technology in the EU.
  • EU GDPR: European General Data Protection Regulation 2016/679: this regulation concerns privacy, ownership, and data protection and replaces EU DPR-2012. It provides a single set of rules directly applicable in the EU member states.
  • EU Connected Communities Initiative: this initiative concerns the IoT development infrastructure, and aims to collect information from the market about existing public and private connectivity projects that seek to provide high-speed broadband (more than 30 Mbps).

United States

A quick overview of the general US legislation that protects civil rights (employment, housing, privacy, information, data, etc.) includes:

  • Fair Housing Act (1968);
  • Fair Credit Reporting Act (1970);
  • Electronic Communications Privacy Act (1986), which applies to service providers that transmit data;
  • Privacy Act (1974), which is based on the Fair Information Practice Principles (FIPP) Guidelines;
  • Breach Notification Rule, which requires companies utilizing health data to notify consumers affected by any data breach;
  • IoT Cybersecurity Improvement Act 2019: the bill seeks “[t]o leverage Federal Government procurement power to encourage increased cybersecurity for Internet of Things devices.” In other words, this bill aims to shore up cybersecurity requirements for IoT devices purchased and used by the federal government, with the aim of affecting cybersecurity on IoT devices more broadly; and
  • SB-327 Information privacy: connected devices: California’s new SB 327 law, which will take effect in January 2020, requires all “connected devices” to have a “reasonable security feature.”

The above legislation is general, and in principle can cover IoT activities, although it was not designed with IoT in mind. Legislation devoted particularly to IoT includes the following:

  • White House Initiative 2012: the purpose of this initiative is to specify a framework for protecting the privacy of the consumer in a networked world.

This initiative involves a report on a “Consumer Bill of Rights,” which is based on the so-called “Fair Information Practice Principles” (FIPP). These include two principles:

  1. Respect for Context Principle: consumers have a right to insist that the collection, use, and disclosure of personal data by Companies is done in ways that are compatible with the context in which consumers provide the data;
  2. Individual Control Principle: consumers have a right to exert control over the personal data companies collect from them or how they use it.

China

Where we start to see the most advanced picture is in China. In 2017, the Ministry of Industry and Information Technology (MIIT), China’s telecom regulator and industrial policy maker, issued the Circular on Comprehensively Advancing the Construction and Development of Mobile Internet of Things (NB-IoT) (MIIT Circular [2017] No. 351, the “Circular”), with the following approach in the opening provisions:

Building a wide-coverage, large-connect, low-power mobile Internet of Things (NB-IoT) infrastructure and developing applications based on NB-IoT technology will help promote the construction of network powers and manufacturing powers, and promote “mass entrepreneurship, innovation” and “Internet +” development. In order to further strengthen the IoT application infrastructure, promote the deployment of NB-IoT networks and expand industry applications, and accelerate the innovation and development of NB-IoT …[11]

Nowadays China already has a substantial body of regulation on technological matters:

  • 2015 State Council – China Computer Information System Security Protection Regulation (first in 1994);
  • 2007 MPS – Management Method for Information Security Protection for Classified Levels;
  • 2001 NPC Standing Committee – Resolution about Protection of Internet Security;
  • 2012 NPC Standing Committee – Resolution about Enhance Network Information Protection;
  • July 2015: National Security Law – ‘secure and controllable’ systems and data security in critical infrastructure and key areas;
  • 2014 MIIT – Guidance on Enhance Telecom and Internet Security;
  • 2013 MIIT – Regulation about Telecom and Internet Personal Information Protection
  • 2014 China Banking Regulatory Commission – Guidance for Applying Secure and Controllable Information Technology to Enhance Banking Industry Cybersecurity and Informatization Development

Further, as if this were not enough, the Chinese government is being proactive and has several important laws and regulations in the pipeline, as can be seen from the list below:

  • CAC: Administrative Measures on Internet Information Services;
  • CAC Rules on Security Protection for Critical Information Infrastructure;
  • Cybersecurity Law;
  • Cyber Sovereignty;
  • Security of Product and Service;
  • Security of Network Operation (Classified Levels Protection, Critical Infrastructure);
  • Data Security (Category, Personal Information);
  • Information Security.

Finally, in 2016 China established the National Information Security Standardization Technical Committee, whose current work under standardization committee TC260 (IT Security) includes developing technical requirements for industrial network protocols and a general reference model and requirements for machine-to-machine (M2M) security.

Latin America

The Latin American countries have different levels of development and this sets up a huge asymmetry between the domestic legal frameworks. The following is a quick regulation overview on Latin American countries:

  • Brazil has the “National IoT Plan” (Decree N. 9.854/2019), which aims to ensure the development of public policies for this technology sector, and members of the Brazilian parliament have presented bill No. 7.656/17 with the purpose of eliminating tax charges on IoT products;
  • Colombia has a draft law, No. 152/2018, on the modernization of the information and communication sector, providing investment incentives to IT companies (article 3);
  • Chile has a new draft law, Boletín N° 12.192-25/2018, on cybercrimes and the regulation of internet devices and hacker attacks;
  • In 2017, Argentina launched a public consultation on IoT regarding which regulations must be updated and how to achieve greater security and improve the technological level of the country[12].

Most Promising Smart Environments

Smart environments are regarded as the spaces within which IoT devices interact, connected through a continuous network. Smart environments thus aim to improve the experience of individuals in every environment by replacing hazardous work, physical labor and repetitive tasks with automated agents. Generally speaking, sensors are the basis of these kinds of smart devices, which have many different applications, e.g. Smart Parking, Waste Management, Smart Roads and Traffic Congestion, Air Pollution, River Floods, M2M Applications, Vehicle auto-diagnosis, Smart Farming, Energy and Water Uses, Medical and Health Smart applications, etc[13].

Another way of looking at smart environments and assessing their relative capacity to produce business opportunities is to identify and examine the most important IoT use cases that are either already being exploited or will be fully exploited by 2020.

For the purposes of this article, the approach was restricted to sectors consisting of the most promising smart environments to be developed up to 2020 in the European Market as displayed in the Chart below:

[Chart: Vertical IoT Market Size in Europe]

The conclusions of the European Commission’s latest report are impressive and help explain the continuing development of the IoT market, how every market will have to comply with the law, and the regulatory avalanche these markets will face, as discussed in the Regulatory Ecosystem section above.

Final Considerations: IoT as Consumer Product Health and Safety

IoT safety is becoming more important every day. On the one hand, as mentioned above, most concerns about IoT safety are primarily in the areas of cyber-attacks, hacking, data privacy, and similar topics: matters better referred to as security than safety. On the other hand, safety can be approached through the physical hazards that may result from the operation of consumer products in an IoT environment or system. IoT provides a new way to approach business and is not restricted to one market or topic. It is a metatopic or metamarket with different possibilities and applications, and it will spread in the near future.

In general, IoT products are electrical or electronic applications with a power source and a battery connected by a charging device. So long as power sources, batteries and charging devices are present, we have the usual risks of electrical hazards (fire, burns, electrical shock, etc.). Nonetheless, IoT makes matters more complicated, as smart devices have the ability to send commands and control devices in the real world.

IoT applications can switch the main electrical power of secondary products or operate complex motor systems, and so on. They therefore have to be accurate and should meet minimum requirements to protect consumer health and safety. Risk assessment and hazard mitigation will have to adapt to IoT applications, inventing new methods to assure regular standards of IoT usability. Traditional health and safety regulations must be brought up to date with this new technological reality to be effective at reducing safety hazards for consumer products.

To conclude, this article was intended to summarize two main issues: I) IoT is an expanding, cross-topic market that will become a present reality ever closer to our daily lives; II) IoT will be regulated and will become an important concern in consumer product health and safety.

See the first installment of this IoT series: Seizing the Benefits and Addressing the Challenges and the Vision of IoT in 2020.


[1] Nóra Ni Loideain. Port in the Data-Sharing Storm: The GDPR and the Internet of Things. King’s College London Dickson Poon School of Law Legal Studies Research Paper Series: Paper No. 2018-27. P. 2.

[2] Spyros G Tzafestas. Ethics and Law in the Internet of Things World. Smart Cities 2018, 1(1), 98-120. P. 102.

[3] Spyros G Tzafestas. Ethics and Law in the Internet of Things World. Smart Cities 2018, 1(1), 98-120. P. 99.

[4] Nóra Ni Loideain. Port in the Data-Sharing Storm: The GDPR and the Internet of Things. King’s College London Dickson Poon School of Law Legal Studies Research Paper Series: Paper No. 2018-27. P. 19.

[5] Petar Radanliev, David Charles De Roure and others. Definition of Internet of Things (IoT) Cyber Risk – Discussion on a Transformation Roadmap for Standardization of Regulations, Risk Maturity, Strategy Design and Impact Assessment. Oxford University. MPRA Paper No. 92569, March 2019, P. 1.

[6] Spyros G Tzafestas. Ethics and Law in the Internet of Things World. Smart Cities 2018, 1(1), 98-120. P. 101. https://doi.org/10.3390/smartcities1010006

[7] More information available here.

[8] EUR-Lex Document 32013L0040. Directive 2013/40/EU of the European Parliament and the Council of 12 August 2013. Available here.

[9] NIS Directive. The Directive on Security of Network and Information Systems.

[10] EUR-Lex Document 32014L0053. Directive 2014/53/EU of the European Parliament and the Council of 16 April 2014.

[11] Notice of the General Office of the Ministry of Industry and Information Technology on Promoting the Development of Mobile Internet of Things. Department of Industry communication letter [2017] No. 351.

[12] Available here.

[13] More examples.


Copyright © 2019 Compliance and Risks Ltd.
This article was written by João Pedro Paro from Compliance & Risks.

Utah to Test Blockchain Voting Through Mobile Apps

As we head toward 2020, expect significant public debate relating to smartphone applications designed to increase turnout and participation in upcoming elections. The Democratic Party has dipped its toe in the water by announcing in July plans to allow telephone voting in lieu of appearing for neighborhood caucus meetings in the key early primary states of Iowa and Nevada.

Given concerns regarding the security and reliability of submitting votes over the internet, jurisdictions around the country have begun to test solutions involving blockchain technology to allow absentee voters to submit voting ballots. Following initial pilot programs in Denver and West Virginia, Utah County, Utah will be the next jurisdiction to utilize a blockchain-based mobile app in connection with its upcoming municipal primary and general elections.

The pilot program, which will utilize the mobile voting application “Voatz”, will allow active-duty military, their eligible dependents and overseas voters to cast absentee ballots. Eligible voters will need to apply for an absentee ballot with the county clerk and then download the mobile application. The ballot itself will be unlocked using the smartphone’s biometric data (e.g., a fingerprint or facial recognition) and then will be distributed into the blockchain framework for tabulation.
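The article does not detail Voatz’s internals, but the tamper-evidence idea behind a blockchain ballot ledger can be sketched briefly: each recorded ballot is hashed together with the hash of the previous block, so altering any earlier ballot invalidates every block after it. The Python sketch below is a toy illustration of that append-only property only; it is not a description of the Voatz system, and real election systems also require voter anonymity, authentication and distributed consensus.

    # Toy sketch of an append-only, hash-chained ballot ledger. Each block
    # commits to the previous block's hash, so altering any recorded ballot
    # invalidates every later block. Illustrates tamper evidence only; it is
    # not how Voatz or any production voting system works.
    import hashlib
    import json

    def block_hash(body):
        # Hash the block's contents deterministically (sorted keys).
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def append_ballot(chain, ballot):
        prev = chain[-1]["hash"] if chain else "0" * 64
        body = {"index": len(chain), "prev_hash": prev, "ballot": ballot}
        chain.append({**body, "hash": block_hash(body)})

    def verify(chain):
        for i, block in enumerate(chain):
            body = {k: v for k, v in block.items() if k != "hash"}
            if block["hash"] != block_hash(body):
                return False  # block contents were altered
            if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
                return False  # chain linkage broken
        return True

    chain = []
    append_ballot(chain, {"precinct": "14", "choice": "A"})
    append_ballot(chain, {"precinct": "7", "choice": "B"})
    assert verify(chain)
    chain[0]["ballot"]["choice"] = "B"  # tampering with a recorded ballot...
    assert not verify(chain)            # ...is detected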

Copyright © 2019 Robinson & Cole LLP. All rights reserved.
This article was written by Benjamin C. Jensen of Robinson & Cole LLP.

Hush — They’re Listening to Us

Apple and Google have suspended their practice of reviewing recordings from users interacting with their voice assistant programs. Did you know this was happening to begin with?

These companies engaged in “grading,” a process in which they review supposedly anonymized recordings of conversations people had with voice assistant programs like Siri. A recent Guardian article revealed that these recordings were being passed on to service providers around the world to evaluate whether the voice assistant program was prompted intentionally and whether its responses to the questions users asked were appropriate.

These recordings can include a user’s most private interactions and are vulnerable to being exposed. Google acknowledged “misconduct” regarding a leak of Dutch-language conversations by one of the language experts it contracted to refine its Google Assistant program.

Reports indicate that around 1,000 conversations captured by Google Assistant (available in Google Home smart speakers, Android devices and Chromebooks) were leaked to the Belgian news outlet VRT NWS. Google audio snippets are not associated with particular user accounts as part of the review process, but some of those messages revealed sensitive information such as medical conditions and customer addresses.

Google will suspend using humans to review these recordings for at least three months, according to the Associated Press. This is yet another friendly reminder to Google Assistant users that they can turn off storing audio data to their Google account completely, or choose to auto-delete data every three months or every 18 months. Apple is also suspending grading and will review its process to improve its privacy practices.

Despite Google’s and Apple’s recent announcements, enforcement authorities are still looking to take action. The German regulator, the Hamburg Commissioner for Data Protection and Freedom of Information, notified Google of its plan to use Article 66 powers under the General Data Protection Regulation (GDPR) to begin an “urgency procedure.” Since the GDPR’s implementation, we haven’t seen this enforcement action utilized, but its impact is significant, as it allows enforcement authorities to halt data processing when there is “an urgent need to act in order to protect the rights and freedoms of data subjects.”

While Google allows users to opt out of some uses of their recordings, Apple has not provided users that ability other than by disabling Siri entirely. Neither company’s privacy policy explicitly warned users of these recordings, though both reserve the right to use the information collected to improve their services. Apple, however, has disclosed that it will soon provide a software update to allow Siri users to opt out of participation in grading.

Since we’re talking about Google Assistant and Siri, we have to mention the third member of the voice assistant triumvirate, Amazon’s Alexa. Amazon employs temporary workers to transcribe the voice commands given to Alexa. Users can opt out of “Help[ing] Improve Amazon Services and Develop New Features” and thereby prevent their voice recordings from being evaluated.

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.