The settlements offer numerous takeaways, including reminders for those who use persistent identifiers to track children online and deliver them targeted ads. These takeaways include, but are not limited to, the following.
FTC Chairman Joseph Simons stated that “YouTube touted its popularity with children to prospective corporate clients … yet when it came to complying with COPPA, the company refused to acknowledge that portions of its platform were clearly directed to kids.”
First, under COPPA, a child-directed website or online service – or a site that has actual knowledge it’s collecting or maintaining personal information from a child – must give clear notice on its site of “what information it collects from children, how it uses such information and its disclosure practices for such information.”
Second, the website or service must give direct notice to parents of their practices “with regard to the collection, use, or disclosure of personal information from children.”
Third, prior to collecting personal information from children under 13, COPPA-covered companies must get verifiable parental consent.
COPPA’s definition of “personal information” specifically includes persistent identifiers used for behavioral advertising. It is critical to note that third-party platforms are subject to COPPA when they have actual knowledge they are collecting personal information from users of a child-directed website.
In March 2019, the FTC handed down what was then the largest civil penalty ever for violations of COPPA, following allegations that Musical.ly knew many of its users were children and still failed to seek parental consent. There, the FTC charged that Musical.ly failed to provide notice on its website of the information it collects online from children, how it uses that information and its disclosure practices; failed to provide direct notice to parents; failed to obtain consent from parents before collecting personal information from children; failed to honor parents’ requests to delete personal information collected from children; and retained personal information for longer than reasonably necessary.
Content creators must know COPPA’s requirements.
If a platform hosting third-party content knows that content is directed to children, it is unlawful to collect personal information from viewers without getting verifiable parental consent.
While it may be fine for most commercial websites geared to a general audience to include a corner for children, if that portion of the website collects information from users, COPPA obligations are triggered.
Comprehensive COPPA policies and procedures to protect children’s privacy are a good idea, as are competent oversight, COPPA training for relevant personnel, the identification of risks that could result in violations of COPPA, the design and implementation of reasonable controls to address the identified risks, the regular monitoring of the effectiveness of those controls, and the development and use of reasonable steps to select and retain service providers that can comply with COPPA.
The FTC and the New York Attorney General are serious about COPPA enforcement. Companies should exercise caution with respect to such data collection practices.
Google’s team of hackers – working on Project Zero – say the cyberattack occurred when Apple users visited a seemingly genuine webpage, with the spyware then installing itself on their phones. It was then capable of sending the user’s texts, emails, photos, real-time location, contacts, account details (you get the picture) almost instantaneously back to the perpetrators of the hack (which some reports suggest was a nation state). The hack wasn’t limited to Apple apps either, with reports the malware was able to extract data from WhatsApp, Google Maps and Gmail.
For us, the scare factor goes beyond data from our smart devices inadvertently revealing secret locations, or being used against us in court – the data and information the cyberspies could have had access to could wreak absolute havoc on everyday iPhone users’ lives (and the lives of the people whose details are stored in their phones).
We’re talking about this in the past tense because, while it was only discovered by Project Zero recently, Apple reportedly fixed the vulnerability without much ado in February this year by releasing a software update.
So how do you protect yourself from being spied on? It seems there’s no sure-fire way to entirely prevent yourself from becoming a victim or, if you were a victim of this particular attack, to mitigate the damage. But, according to Apple, “keeping your software up to date is one of the most important things you can do to maintain your Apple product’s security”. We might not be ignoring those pesky “a new update is available for your phone” messages anymore.
Title III of the Americans with Disabilities Act (ADA), enacted in 1990, prohibits discrimination against disabled individuals in “places of public accommodation”—defined broadly to include private entities that offer commercial services to the public. 42 U.S.C. § 12181(7). Under the ADA, disabled individuals are entitled to the full and equal enjoyment of the goods, services, facilities, privileges, and accommodations offered by a place of public accommodation. Id. § 12182(a). To comply with the law, places of public accommodation must take steps to “ensure that no individual with a disability is excluded, denied services, segregated or otherwise treated differently than other individuals.” Id. § 12182(b)(2)(A)(iii).
In the years immediately following the enactment of the ADA, the majority of lawsuits alleging violations of Title III arose as a result of barriers that prevented disabled individuals from accessing brick-and-mortar businesses (e.g., a lack of wheelchair ramps or accessible parking spaces). However, the use of the Internet to transact business has become virtually ubiquitous since the ADA’s passage almost 30 years ago. As a result, lawsuits under Title III have proliferated in recent years against private businesses whose web sites are inaccessible to individuals with disabilities. Indeed, the plaintiffs’ bar has formed something of a cottage industry, with numerous firms devoted to issuing pre-litigation demands to large numbers of small to mid-sized businesses, alleging that the businesses’ web sites are not ADA-accessible. The primary purpose of this often-effective strategy is to swiftly obtain a large volume of monetary settlements without incurring the costs of initiating litigation.
Yet despite this upsurge in web site accessibility lawsuits—actual and threatened—courts have not yet reached a consensus on whether the ADA even applies to web sites. As discussed above, Title III of the ADA applies to “places of public accommodation.” A public accommodation is a private entity that offers commercial services to the public. 42 U.S.C. § 12181(7). The First, Second, and Seventh Circuit Courts of Appeals have held that web sites can be a “place of public accommodation” without any connection to a brick-and-mortar store.1 However, the Third, Sixth, Ninth, and Eleventh Circuit Courts of Appeals have suggested that Title III applies only if there is a “nexus” between the goods or services offered to the public and a brick-and-mortar location.2 In other words, in the latter group of Circuits, a business that operates solely through the Internet and has no customer-facing physical location may be under no obligation to make its web site accessible to users with disabilities.
To make matters even less certain, neither Congress nor the Supreme Court has established a uniform set of standards for maintaining an accessible web site. The Department of Justice (DOJ) has, for years, signaled its intent to publish specific guidance regarding uniform standards for web site accessibility under the ADA. However, to date, the DOJ has not published such guidance and, given the agency’s present priorities, it is unlikely to do so in the near future. Accordingly, courts around the country have been called on to address whether specific web sites provide sufficient access to disabled users. In determining the standards for ADA compliance, several courts have cited the Web Content Accessibility Guidelines (WCAG) 2.1, Level AA (or its predecessor, WCAG 2.0), a series of web accessibility guidelines published by the World Wide Web Consortium, a nonprofit organization formed to develop uniform international standards across the Internet. While not law, the WCAG are simply recommended guidelines for how businesses can develop their web sites to be accessible to users with disabilities. In the absence of legal requirements, however, businesses lack clarity on what, exactly, is required to comply with the ADA.
Nevertheless, given the proliferation of lawsuits in this area, businesses that sell goods or services through their web sites or have locations across multiple jurisdictions should take concrete steps to audit their web sites and address any existing accessibility barriers.
Several online tools exist that allow users to conduct free, instantaneous audits of any URL, such as those offered at https://tenon.io/ and https://wave.webaim.org/. However, companies should be aware that the reports generated by such tools can be under-inclusive, in that they may not address every accessibility benchmark in WCAG 2.1. The reports also can be over-inclusive and identify potential accessibility issues that would not prevent disabled users from fully accessing and using a site. Accordingly, companies seeking to determine their potential exposure under Title III should engage experienced third-party auditors to conduct individualized assessments of their web sites. Effective audits typically involve an individual tester attempting to use assistive technology, such as screen readers, to view and interact with the target site. Businesses also should regularly re-audit their web sites, as web accessibility allegations often arise in connection with web sites that were originally built to be ADA-compliant but have fallen out of compliance due to content additions or updates.
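To illustrate the kind of rule-based check these automated tools run, here is a minimal sketch (assuming Python with the requests and beautifulsoup4 packages; the target URL is a placeholder) that flags one common WCAG 2.1 failure: images without text alternatives. It also illustrates why such tools can be both under- and over-inclusive: a script can detect a missing alt attribute, but it cannot judge whether existing alt text is actually meaningful to a screen-reader user.

```python
# Minimal illustration of one automated accessibility check (WCAG 2.1
# SC 1.1.1: images need text alternatives). Real audit tools run hundreds
# of such rules; this sketch only flags <img> tags with no alt attribute.
import requests
from bs4 import BeautifulSoup

def find_images_missing_alt(url: str) -> list[str]:
    """Return the src of every <img> on the page lacking an alt attribute."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [
        img.get("src", "<no src>")
        for img in soup.find_all("img")
        if img.get("alt") is None  # alt="" is valid for decorative images
    ]

if __name__ == "__main__":
    for src in find_images_missing_alt("https://example.com"):  # placeholder URL
        print(f"Missing alt text: {src}")
```

A check like this can confirm that an attribute exists, but only a human tester using a screen reader can confirm that the page is genuinely usable, which is why individualized manual audits remain important.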
Companies building new web sites, updating existing sites, or creating remediation plans should consider working with web developers familiar with and able to comply with the WCAG 2.1 criteria. While no federal court has held that compliance with WCAG 2.1 is mandatory under Title III, several have recognized the guidelines as establishing a sufficient level of accessibility for disabled users.3 Businesses engaging new web developers to design or revamp their sites should ask specific questions regarding the developers’ understanding of and ability to comply with WCAG 2.1 in the site’s development, and should memorialize any agreements regarding specific accessibility benchmarks with the web developer in writing.
1 See Carparts Distrib. Ctr., Inc. v. Auto. Wholesaler’s Ass’n of New England, Inc., 37 F.3d 12, 19 (1st Cir. 1994) (“By including ‘travel service’ among the list of services considered ‘public accommodations,’ Congress clearly contemplated that ‘service establishments’ include providers of services which do not require a person to physically enter an actual physical structure.”); Andrews v. Blick Art Materials, LLC, 268 F. Supp. 3d 381, 393 (E.D.N.Y. 2017); Doe v. Mut. of Omaha Ins. Co., 179 F.3d 557, 559 (7th Cir. 1999).
2 See Peoples v. Discover Fin. Servs., Inc., 387 F. App’x 179, 183 (3d Cir. 2010) (“Our court is among those that have taken the position that the term is limited to physical accommodations”) (citation omitted); Parker v. Metro. Life Ins. Co., 121 F.3d 1006, 1010-11 (6th Cir. 1997); Weyer v. Twentieth Century Fox Film Corp., 198 F.3d 1104, 1114 (9th Cir. 2000); Haynes v. Dunkin’ Donuts LLC, 741 F. App’x 752, 754 (11th Cir. 2018) (“It appears that the website is a service that facilitates the use of Dunkin’ Donuts’ shops, which are places of public accommodation.”).
3 See, e.g., Robles v. Domino’s Pizza, LLC, 913 F.3d 898, 907 (9th Cir. 2019) (holding that failure to comply with WCAG is not a per se violation of the ADA, but that trial courts “can order compliance with WCAG 2.0 as an equitable remedy if, after discovery, the website and app fail to satisfy the ADA.”).
As real-world celebrities continue to expand the reach of their persona into the digital realm, the potential benefit for advertisers, game developers and esports event promoters is exceedingly high. But with increased opportunity comes increased risk.
A New York Supreme Court recently addressed this risk when it construed the State’s right of publicity statute[1] in a dispute over an NBA 2K18 video game avatar. In Champion v. Take Two Interactive Software, Inc., celebrity basketball entertainer Phillip “Hot Sauce” Champion sued the video game developer, alleging violation of his right to privacy for Take-Two’s use of his name and likeness. The Court ultimately dismissed the lawsuit, but not before it provided a helpful discussion of New York’s publicity statute and its modern application to the esports industry.
A Primer on New York’s Publicity Statute
New York publicity law allows both criminal charges and civil liability for use of a person’s “name, portrait or picture” for advertising or trade purposes without prior written permission. This right to publicity extends to any recognizable likeness that has a “close and purposeful resemblance to reality.” Courts have already held that video game avatars are within the scope of the statute’s reach.
However, while seemingly broad at first pass, this statutory right is actually narrower than similar rights in other states where the right to publicity is recognized only at common law (i.e., in states that have no black-letter publicity statute). For example, in New York, neither “incidental” use of a person’s name or likeness nor use that is protected under the First Amendment is a violation.
Further, unlike the words “portrait” and “picture,” the word “name” in the statute is construed literally. In fact, New York courts find liability only for uses involving an individual’s full name, and not just a surname, nickname, or business name. The statute does, however, protect certain “stage names” in limited situations, such as when the individual has become known by a stage name virtually to the exclusion of his or her real name.
The Plaintiff and the Video Game
Phillip Champion is a prominent street basketball entertainer known professionally as “Hot Sauce.” Champion claims that he is widely recognized as both “Hot Sauce” and “Hot Sizzle” in social media, and is regularly depicted on television and in blogs, movies, YouTube videos, sports magazines and live halftime shows. As a result, Champion is able to license his celebrity persona through sponsorships and endorsement deals with prominent consumer brands like AND1.
Photographs of Champion filed with the Court.
Take-Two created the NBA 2K18 basketball simulation video game, which realistically depicts the on-court competition and off-court management of the National Basketball Association. Users can create a custom player avatar, or select from existing player avatars modeled after real-life professional athletes. The game’s “MyCareer” mode allows the user to create a custom basketball player, and then design and play through the character’s entire career, competing in games and participating in off-court activities. The “Neighborhoods” option, which ties to the off-court activities in the MyCareer mode, lets users explore an off-court urban world while interacting with other basketball players—most of which are non-playable characters controlled by the computer—in scenarios like exercising in public gyms and playing casual basketball games on city courts.
Champion’s Claims
Champion’s lawsuit stems from one of the non-playable characters in the game’s Neighborhood mode, who is depicted as a young, African-American male with a mohawk, wearing all-white hi-top sneakers, a tank-top, and black shorts with white piping. On the front and back of the tank-top is the numeral “1,” and on the back are the words “Hot Sizzles.”
Images of the “Hot Sizzles” avatar filed with the Court.
Champion alleged that the look of the “Hot Sizzles” avatar incorporated numerous personal aspects of his life and identity in violation of the New York publicity statute, and further that the avatar’s “Hot Sizzles” name was itself a violation because Champion is “ubiquitously” known as “Hot Sizzle.” Take-Two responded that its “Hot Sizzles” avatar does not sufficiently resemble Champion, whether in name or image, under New York law.
On Champion’s claims to his likeness, the Court found no physical resemblance between Champion and the “Hot Sizzles” avatar, and determined that the only reasonable commonalities are that “both are male, African-American in appearance, and play basketball.” The Court compared this case to two similar cases (Lohan v. Take-Two Interactive Software, Inc. and Gravano v. Take-Two Interactive Software, Inc.), both involving Take-Two’s Grand Theft Auto video game, in which the avatars exhibited many closer similarities to the plaintiffs in clothes, hair, poses, voice, and life stories. Finding no similar likenesses in this case, the Court ruled that, at least from a visual perspective, the Hot Sizzles avatar in NBA 2K18 is not recognizable as Champion as a matter of law.
On Champion’s claim to the name “Hot Sizzles,” the Court recognized that the use of a person’s celebrity or “stage” name with a video game avatar could aid in recognition of the avatar as that person’s likeness. However, the Court determined that Champion’s “primary performance persona” is actually “Hot Sauce,” which is entirely distinct from the NBA 2K18 avatar’s name, “Hot Sizzles.” Champion was not able to show that he is widely known as “Hot Sizzle” to the public at large—as opposed to just in the sporting or gaming circles—so the Court ruled that, without this level of connection between Champion and the name “Hot Sizzle,” Take-Two’s use of “Hot Sizzles” does not aid in the visual recognition of the NBA 2K18 avatar as Champion.[2]
Incidental Use and the First Amendment
Take-Two also defended against Champion’s claims by alleging that the “Hot Sizzles” character falls within the “incidental use” exception to liability under New York’s statute. After reviewing the NBA 2K18 game content and related advertising, the Court seemed to agree that the avatar “is a peripheral non-controllable character” that “adds nothing of true substance to a user’s experience in the game.” However, the Court declined to make an affirmative ruling on this component of the lawsuit.
Finally, Take-Two argued that its NBA 2K18 game is protected speech under the First Amendment, and as such, it does not constitute “advertising or trade” under New York’s law. In response, the Court declared that, while video games may conceptually qualify for free speech protection, not every video game constitutes “free speech” fiction or satire. In comparing NBA 2K18 to games that contain a detailed story with pre-defined characters, dialogue and unique environments created entirely by the game designers, the Court determined that here, the users create their own basketball career and completely define their character. Accordingly, the Court found that categorizing NBA 2K18 as “protected fiction or satire” under the First Amendment is “untenable.”
What it Means
As novel sponsorship and endorsement opportunities are created through the advent of esports, advertisers, game developers, and event promoters must be certain they have the appropriate content and publicity licenses in place. However, because publicity laws, in particular, are enforced at the state level, doing this without expert guidance can be daunting. Using the right tools and a proactive licensing and review strategy, brands and marketing agencies can capture (and keep) a broader share of the esports industry’s revenues, and keep the competition on the court, not in it.
[1] New York Civil Rights Law, §§ 50-51.
[2] The Court determined that “Hot Sizzle” is, at best, Champion’s secondary “nickname.”
The sex police are out there on the streets
Make sure the pass laws are not broken
Undercover (of the Night), The Rolling Stones
So, now we know that browsing porn in “incognito” mode doesn’t prevent those sites from leaking your dirty data courtesy of the friendly folks at Google and Facebook. Ninety-three per cent of porn sites leak user data to a third party. Google tracks about 74 per cent of the analyzed porn sites, while Oracle tracks nearly 24 per cent of the sites and Facebook tracks nearly 10 per cent. Yet, despite such stats, 30 per cent of all internet traffic still relates to porn sites.
The hacker who perpetrated the enormous Capital One data breach outed herself by oversharing on GitHub. Had she been able to keep her trap shut, we’d probably still not know that she was in our wallets. Did she want to get caught, or was she simply unashamed of having stolen a Queen’s ransom worth of financial data?
Many have lamented that shame (along with irony, truth and proper grammar) is dead. I disagree. I think that shame has been on the outward leg of a boomerang trajectory fueled by technology and is accelerating on the return trip to whack us noobs in the back of our unsuspecting heads.
Technology has allowed us to do all sorts of stuff privately that we used to have to muster the gumption to do in public. Buying Penthouse the old-fashioned way meant you had to brave the drugstore cashier, who could turn out to be a cheerleader at your high school or your Mom’s PTA friend. Buying the Biggie Bag at Wendy’s meant enduring the disapproving stares of vegans buying salads and diet iced tea. Let’s not even talk about ED medication or baldness cures.
All your petty vices and vanity purchases can now be indulged in the sanctity of your bedroom. Or so you thought. There is no free lunch, naked or otherwise, we are coming to find. How will society respond?
Country music advises us to dance like no one is watching and to love like we’ll never get hurt. When we are alone, we can act closer to our baser instincts. This is why privacy is protective of creativity and subversive behaviors, and why in societies without privacy, people’s behavior regresses toward the most socially acceptable responses. As my partner Ted Claypoole wrote in Privacy in the Age of Big Data,
“We all behave differently when we know we are being watched and listened to, and the resulting change in behavior is simply a loss of freedom – the freedom to behave in a private and comfortable fashion; the freedom to allow the less socially careful branches of our personalities to flower. Loss of privacy reduces the spectrum of choices we can make about the most important aspects of our lives.
By providing a broader range of choices, and by freeing our choices from immediate review and censure from society, privacy enables us to be creative and to make decisions about ourselves that are outside the mainstream. Privacy grants us the room to be as creative and thought-provoking as we want to be. British scholar and law dean Timothy Macklem succinctly argues that the “isolating shield of privacy enables people to develop and exchange ideas, or to foster and share activities, that the presence or even awareness of other people might stifle. For better and for worse, then, privacy is a sponsor and guardian to the creative and the subversive.”
For the past two decades we have let down our guard, exercising our most subversive and embarrassing expressions of id in what we thought was a private space. Now we see that such privacy was likely an illusion, and we feel as if we’ve been somehow gaslit into showing our noteworthy bad behavior in the disapproving public square.
Exposure of the Ashley Madison affair-seeking population should have taught us this lesson, but it seems that each generation needs to learn in its own way.
The nerds will, inevitably, figure out how to continue to work and play largely unobserved. But what of the rest of us? Will the pincer attack of the advancing surveillance state and the denizens of the Dark Web bring shame back as a countervailing force to govern our behavior? Will the next decade be marked as the New Puritanism?
Dwight Lyman Moody, a prominent 19th-century evangelist, author, and publisher, famously said, “Character is what you are in the dark.” Through the night vision goggles of technology, more and more of your neighbors can see who you really are, and there are very few of us who can bear that kind of scrutiny. Maybe Mick Jagger had it right all the way back in 1983, when he advised “Curl up baby/Keep it all out of sight.” Undercover of the night indeed.
This week, the Federal Trade Commission (FTC) entered into a proposed settlement with Unrollme Inc. (“Unrollme”), a free personal email management service that offers to assist consumers in managing the flood of subscription emails in their inboxes. The FTC alleged that Unrollme made certain deceptive statements to consumers, who may have had privacy concerns, to persuade them to grant the company access to their email accounts. (In re Unrollme Inc., File No. 172 3139 (FTC proposed settlement announced Aug. 8, 2019).)
This settlement touches many relevant issues, including the delicate nature of online providers’ privacy practices relating to consumer data collection, the importance for consumers to comprehend the extent of data collection when signing up for and consenting to a new online service or app, and the need for downstream recipients of anonymized market data to understand how such data is collected and processed. (See also our prior post covering an enforcement action involving user geolocation data collected from a mobile weather app).
A quick glance at headlines announcing the settlement might give the impression that the FTC found Unrollme’s entire business model unlawful or deceptive, but that is not the case. As described below, the settlement involved only a subset of consumers who received allegedly deceptive emails to coax them into granting access to their email accounts. The model of providing free products or services in exchange for permission to collect user information for data-driven advertising or ancillary market research remains widespread, though it could face some changes when California’s CCPA consumer choice options become effective or in the event Congress passes a comprehensive data privacy law.
As part of the Unrollme registration process, users grant Unrollme access to selected personal email accounts for decluttering purposes. However, this permission also allows Unrollme to access and scan inboxes for so-called “e-receipts” or emailed receipts from e-commerce transactions. After scanning users’ e-receipt data (which might include billing and shipping addresses and information about the purchased products or services), Unrollme’s parent company, Slice Technologies, Inc., would anonymize the data and package it into market research reports that are sold to various companies, retailers and others. According to the FTC complaint, when some consumers declined to grant permission to their email accounts during signup, Unrollme, during the relevant time period, tried to make them reconsider by sending allegedly deceptive statements about its access (e.g., “You need to authorize us to access your emails. Don’t worry, this is just to watch for those pesky newsletters, we’ll never touch your personal stuff”). The FTC claimed that such messages did not tell users that access to their inboxes would also be used to collect e-receipts and to package that data for sale to outside companies, and that thousands of consumers changed their minds and signed up for Unrollme.
As part of the settlement, Unrollme is prohibited from misrepresentations about the extent to which it accesses, collects, uses, stores or shares information in connection with its email management products. Unrollme must also send an email to all current users who enrolled in Unrollme after seeing the allegedly deceptive statements and explain Unrollme’s data collection and usage practices. Unrollme is also required to delete all e-receipt data obtained from recipients who enrolled in Unrollme after seeing the challenged statements (unless Unrollme receives affirmative consent to maintain such data from the affected consumers).
In an effort at increased transparency, Unrollme’s current home page displays several links to detailed explanations of how the service collects and analyzes user data (e.g., “How we use data”).
Interestingly, this is not the first time Unrollme’s practices have been challenged, as the company faced a privacy suit over its data mining practices last year. (See Cooper v. Slice Technologies, Inc., No. 17-7102 (S.D.N.Y. June 6, 2018) (dismissing a privacy suit that claimed that Unrollme did not adequately disclose to consumers the extent of its data mining practices, and finding that consumers consented to a privacy policy that expressly allowed such data collection to build market research products and services).)
Well, no one can say that he did not get his day in Court.
Plaintiff Ewing, a serial TCPA litigator who filed yet another case assigned to Judge Battaglia, narrowly escaped dismissal of all his claims and was permitted leave to amend for a second time. See Stark v. Stall, Case No. 19-CV-00366-AJB-NLS, 2019 U.S. Dist. LEXIS 132814 (S.D. Cal. Aug. 7, 2019). But in the process, the Judge called attention to Plaintiff’s unprofessional conduct in an earlier case, ruled that he failed to name a necessary party, and found that he inadequately pleaded the existence of an agency relationship between the defendant and the necessary party that he had failed to join in the lawsuit.
At the outset, the court dismissed the claim brought by co-plaintiff Stark, as the Complaint contained no allegations that any wrongful telephone calls were placed to that particular individual.
In 2015, Ewing had already been put on notice of the local rules of professionalism and their applicability to him, despite his status as a pro se litigant. Thus, the Court easily granted defendant’s motion to strike Plaintiff’s allegations that defendant had made a “derogatory remark” simply by pointing out that he had been designated a vexatious litigant.
The two most important pieces of the case for TCPAWorld are the Court’s rulings about Plaintiff’s failure to join a necessary defendant and his insufficient allegations to establish vicarious liability.
Plaintiff had failed to name as a defendant the entity (US Global) that allegedly made the calls to him. The court determined that this company is a necessary party that must be added in order for the court to afford complete relief among the parties. We often see situations where only a caller, but not a seller, creditor, employer, franchisor, etc., is named, or vice versa, so it is encouraging to see courts strictly enforce Federal Rule 19 in the TCPA context.
The court further held that the relationship between Defendant and US Global was not such that Defendant could be held liable for violations of the TCPA that were committed by US Global. While Plaintiff made unsubstantiated allegations that an agency relationship existed, the Court treated these as merely legal conclusions and granted dismissal based on insufficient allegations of facts to establish a plausible claim that there is a common-law agency relationship between Defendant and US Global. Simply stated, the bare allegation that Defendant had the ability to control some aspects of the caller’s activity was insufficient to establish control for purposes of TCPA vicarious liability principles.
Plaintiff’s amended pleading is due on August 31—anticipating another round of motion practice, we will track any further developments in this case.
Hyperconnectivity is a real phenomenon, and it is changing society’s concerns because of the kinds of interactions that IoT devices can bring about: i) people to people; ii) people to things (objects, machines); iii) things/machines to things/machines.
This gives rise to a range of issues for people. According to a European survey, 72% of EU Internet users worry that too much of their personal data is being shared online and that they have little control over what happens to this information[1]. It also gives rise to inevitable ethical questions about our relationship with the technological environment.
The discussion on ethics that follows aims to provide a quick tour of the general ethical principles and theories that may apply to IoT[2]. Law and ethics overlap, but ethics goes beyond law. A comparison of law and ethics, pointing out their differences, is made in the work of Spyros G. Tzafestas, who wrote Ethics and Law in the Internet of Things World. In that article, he considers the risks and harms in a digital world to be very high and complex, explaining the key technical terms and their impact on our private lives. It is thus of primary importance to review IoT and understand the limitations of protective legal, regulatory and ethical frameworks, in order to provide sound recommendations for maximizing good and minimizing harm[3].
Major data security concerns have also been raised with respect to ‘cloud’-supported IoT. Cloud computing (‘the cloud’) essentially consists of the concentration of resources, e.g. hardware and software, into a few physical locations by a cloud service provider (e.g. Amazon Web Services)[4]. We are living in a data-sharing storm, and the economic impact of IoT’s cyber risks is increasing with the integration of digital infrastructure in the digital economy[5]. We are surrounded by devices which contain our data, for instance:
Wearable health technologies: wearable devices that continuously monitor the health status of a patient or gather real-world information about the patient such as heart rate, blood pressure, fever;
Wearable textile technologies: clothes that can change their color on demand or based on the biological condition of the wearer or according to the wearer’s emotions;
As a result of the serious impact IoT may have and because it involves a huge number of connected devices, it creates a new social, political, economic, and ethical landscape. Therefore, for a sustainable development of IoT, political and economic decision-making bodies have to develop proper regulations in order to be able to control the fair use of IoT in society.
In this sense, the most developed regions as regards establishing IoT regulations and an ethical framework are the European Union and the United States, both of which have enacted:
Legislation/regulations;
Ethics principles, rules and codes;
Standards/guidelines;
Contractual arrangements;
Regulations for the devices connected;
Regulations for the networks and their security; and
Regulations for the data associated with the devices.
In light of this, the next section will deal with Data Protection Regulations, Consumer Protection Acts, IoT and Cyber Risks Laws, Roadmap for Standardization of Regulations, Risk Maturity, Strategy Design and Impact Assessment related to the 2020 scenario: 200 billion sensor devices and a market that, by 2025, will be worth between $2.7 trillion and $3 trillion a year.
Europe
The Alliance for Internet of Things Innovation (AIOTI) was initiated by the European Commission in order to open a stream of dialogue between European stakeholders within the Internet of Things (IoT) market. The overall goal of this initiative was the creation of a dynamic European IoT ecosystem to unleash the potential of IoT.
In October 2015, the Alliance published 12 reports covering IoT policy and standards issues. It provided detailed recommendations for future collaborations in the Internet of Things Focus Area of the 2016-2017 Horizon 2020 programme[7].
The IoT regulatory framework in Europe is a growth area:
EU Directive-2013/40: this Directive deals with “Cybercrime” (i.e., attacks against information systems). It provides definitions of criminal offences and sets proper sanctions for attacks against information systems[8].
EU NIS Directive 2016/1148: this Network and Information Security (NIS) Directive concerns “Cybersecurity” issues. Its aim is to provide legal measures to assure a common overall level of cybersecurity (network/information security) in the EU, and an enhanced coordination degree among EU Members[9].
EU Directive 2014/53: this Directive “On the harmonization of the laws of the member states relating to the marketing of radio equipment”[10] is concerned with the standardization issue which is important for the joint and harmonized development of technology in the EU.
EU GDPR: European General Data Protection Regulation 2016/679: this regulation concerns privacy, ownership, and data protection and replaces the 1995 Data Protection Directive (95/46/EC). It provides a single set of rules directly applicable in the EU member states.
EU Connected Communities Initiative: this initiative concerns the IoT development infrastructure, and aims to collect information from the market about existing public and private connectivity projects that seek to provide high-speed broadband (more than 30 Mbps).
United States
A quick overview of the general US legislation that protects civil rights (employment, housing, privacy, information, data, etc.) includes:
Fair Housing Act (1968);
Fair Credit Reporting Act (1970);
Electronic Communications Privacy Act (1986), which applies to service providers that transmit data, and the Privacy Act (1974), which is based on the Fair Information Practice Principles (FIPP) Guidelines;
Breach Notification Rule, which requires companies utilizing health data to notify consumers affected by any data breach; and
IoT Cybersecurity Improvement Act 2019: the Bill seeks “[t]o leverage Federal Government procurement power to encourage increased cybersecurity for Internet of Things devices.” In other words, this bill aims to shore up cybersecurity requirements for IoT devices purchased and used by the federal government, with the aim of affecting cybersecurity on IoT devices more broadly.
SB-327 Information privacy: connected devices: California’s new SB 327 law, which will take effect in January 2020, requires all “connected devices” to have a “reasonable security feature.”
The above legislation is general, and in principle can cover IoT activities, although it was not designed with IoT in mind. Legislation devoted particularly to IoT includes the following:
White House Initiative 2012: the purpose of this initiative is to specify a framework for protecting the privacy of the consumer in a networked world.
This initiative involves a report on a “Consumer Bill of Rights,” which is based on the so-called “Fair Information Practice Principles” (FIPP). The report includes two principles:
Respect for Context Principle: consumers have a right to insist that the collection, use, and disclosure of personal data by companies is done in ways that are compatible with the context in which consumers provide the data;
Individual Control Principle: consumers have a right to exert control over the personal data companies collect from them and how they use it.
China
Where we start to see the most advanced picture is in China. In 2017, the Ministry of Industry and Information Technology (MIIT), China’s telecom regulator and industrial policy maker, issued the Circular on Comprehensively Advancing the Construction and Development of Mobile Internet of Things (NB-IoT) (MIIT Circular [2017] No. 351, the “Circular”), with the following approach in the opening provisions:
Building a wide-coverage, large-connect, low-power mobile Internet of Things (NB-IoT) infrastructure and developing applications based on NB-IoT technology will help promote the construction of network powers and manufacturing powers, and promote “mass entrepreneurship, innovation” and “Internet +” development. In order to further strengthen the IoT application infrastructure, promote the deployment of NB-IoT networks and expand industry applications, and accelerate the innovation and development of NB-IoT[11]
Nowadays, China already has a huge body of regulation on technological matters:
2015 State Council – China Computer Information System Security Protection Regulation (first in 1994);
2007 MPS – Management Method for Information Security Protection for Classified Levels;
2001 NPC Standing Committee – Resolution about Protection of Internet Security;
2012 NPC Standing Committee – Resolution about Enhance Network Information Protection;
July 2015: National Security Law – ‘secure and controllable’ systems and data security in critical infrastructure and key areas;
2014 MIIT – Guidance on Enhance Telecom and Internet Security;
2013 MIIT – Regulation about Telecom and Internet Personal Information Protection;
2014 China Banking Regulatory Commission – Guidance for Applying Secure and Controllable Information Technology to Enhance Banking Industry Cybersecurity and Informatization Development.
Further, as if this were not enough, the Chinese government is being proactive and has several important laws and regulations in the pipeline, as can be seen from the list below:
CAC: Administrative Measures on Internet Information Services;
CAC Rules on Security Protection for Critical Information Infrastructure;
Cybersecurity Law;
Cyber Sovereignty;
Security of Product and Service;
Security of Network Operation (Classified Levels Protection, Critical Infrastructure);
Data Security (Category, Personal Information);
Information Security.
Finally, China established the National Information Security Standardization Technical Committee in 2016; its current work under TC260 (IT Security) includes developing standards on technical requirements for industrial network protocols, and a general reference model and requirements for Machine-to-Machine (M2M) security.
Latin America
The Latin American countries have different levels of development and this sets up a huge asymmetry between the domestic legal frameworks. The following is a quick regulation overview on Latin American countries:
Brazil has the “National IoT Plan” (Decree No. 9.854/2019), which aims to ensure the development of public policies for this technology sector, and members of the Brazilian parliament have presented Bill No. 7.656/17 with the purpose of eliminating tax charges on IoT products;
Colombia has Draft Law No. 152/2018 on the Modernization of the Information and Communications sector, providing investment incentives to IT companies (article 3);
Chile has a new draft law (Boletín N° 12.192-25/2018) on cybercrimes and the regulation of internet devices and hacker attacks;
In 2017, Argentina launched a Public Consultation on IoT regarding regulations that must be updated and how to get more security and improve the technological level of the country[12].
Most Promising Smart Environments
Smart environments are regarded as the spaces within which IoT devices interact, connected through a continuous network. Smart environments thus aim to improve the experience of individuals in every environment by replacing hazardous work, physical labor, and repetitive tasks with automated agents. Generally speaking, sensors are the basis of these kinds of smart devices, which have many different applications, e.g. Smart Parking, Waste Management, Smart Roads and Traffic Congestion, Air Pollution, River Floods, M2M Applications, Vehicle auto-diagnosis, Smart Farming, Energy and Water Uses, Medical and Health Smart applications, etc.[13]
Another way of looking at smart environments and assessing their relative capacity to produce business opportunities is to identify and examine the most important IoT use cases that are either already being exploited or will be fully exploited by 2020.
For the purposes of this article, the approach was restricted to sectors consisting of the most promising smart environments to be developed up to 2020 in the European Market as displayed in the Chart below:
The conclusions of the European Commission’s latest report are impressive and help one understand the continuous development of the IoT market, how every market participant will have to comply with the law, and the regulatory avalanche they will face, as mentioned above in the discussion of the regulatory ecosystem.
Final Considerations: IoT and Consumer Product Health and Safety
IoT safety is becoming more important every day. On the one hand, as mentioned above, most concerns about IoT safety are primarily in the areas of cyber-attacks, hacking, data privacy, and similar topics: what is better referred to as security than safety. On the other hand, there are the physical safety hazards that may result from the operation of consumer products in an IoT environment or system. IoT provides a new way to approach business and is not restricted to one market or topic. It is a metatopic, or metamarket, showing different possibilities and applications, and it will spread in the near future.
In general, IoT products are electrical or electronic applications with a power source and a battery connected by a charging device. So long as power sources, batteries and charging devices are present, we have the usual risks of electrical hazards (fire, burns, electrical shock, etc.). Nonetheless, IoT makes matters more complicated, as smart devices can send commands to, and control, devices in the real world.
IoT applications can switch the main electrical power of secondary products, operate complex motor systems, and so on. They therefore have to be accurate and must meet minimum requirements to protect consumer health and safety. Risk assessment and hazard mitigation will have to adapt to IoT applications, inventing new methods to assure consistent standards of IoT usability. Traditional health and safety regulations must be brought up to date with this new technological reality to be effective at reducing safety hazards for consumer products.
To conclude, this article was intended to summarize two main issues: I) IoT is an expanding, cross-cutting market that will become a present reality ever closer to our daily lives; II) IoT will be regulated and will become an important concern in consumer product health and safety.
[1] Nóra Ni Loideain, Port in the Data-Sharing Storm: The GDPR and the Internet of Things, King’s College London Dickson Poon School of Law Legal Studies Research Paper Series: Paper No. 2018-27, p. 2.
[4] Nóra Ni Loideain, Port in the Data-Sharing Storm: The GDPR and the Internet of Things, King’s College London Dickson Poon School of Law Legal Studies Research Paper Series: Paper No. 2018-27, p. 19.
[5] Petar Radanliev, David Charles De Roure et al., Definition of Internet of Things (IoT) Cyber Risk – Discussion on a Transformation Roadmap for Standardization of Regulations, Risk Maturity, Strategy Design and Impact Assessment, Oxford University, MPRA Paper No. 92569, March 2019, p. 1.
If you think there is safety in numbers when it comes to the privacy of your personal information, think again. A recent study in Nature Communications found that, given a large enough dataset, anonymised personal information is only an algorithm away from being re-identified.
Anonymised data refers to data that has been stripped of any identifiable information, such as a name or email address. Under many privacy laws, anonymising data allows organisations and public bodies to use and share information without infringing an individual’s privacy, or having to obtain necessary authorisations or consents to do so.
But what happens when that anonymised data is combined with other data sets?
Researchers behind the Nature Communications study found that using only 15 demographic attributes can re-identify 99.98% of Americans in any incomplete dataset. While fascinating for data analysts, this finding may alarm individuals who learn that their anonymised data can be re-identified so easily and potentially then accessed or disclosed by others in a way they have not envisaged.
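The mechanics behind such findings can be illustrated with a toy “linkage attack,” in which quasi-identifiers shared between an anonymised dataset and a public one are joined to re-attach names. The data and column names below are entirely hypothetical, and the Nature Communications study used a statistical model over 15 attributes rather than an exact join; this sketch merely shows why a handful of demographic attributes can be enough.

```python
# Toy illustration of a linkage attack: an "anonymised" dataset (names
# stripped) is joined to a public dataset on shared demographic attributes
# (quasi-identifiers). All data and column names here are hypothetical.
import pandas as pd

# "Anonymised" records: direct identifiers removed.
anonymised = pd.DataFrame({
    "zip":        ["02139", "02139", "10001"],
    "birth_year": [1984,    1991,    1975],
    "sex":        ["F",     "M",     "F"],
    "diagnosis":  ["asthma", "flu",  "diabetes"],
})

# Public records (e.g. a voter roll) that do carry names.
public = pd.DataFrame({
    "name":       ["A. Jones", "B. Smith"],
    "zip":        ["02139",    "10001"],
    "birth_year": [1984,       1975],
    "sex":        ["F",        "F"],
})

# Joining on just three quasi-identifiers re-attaches names to diagnoses.
reidentified = anonymised.merge(public, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```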
Re-identification techniques were recently used by The New York Times, which in March this year pulled together various public data sources, including an anonymised dataset from the Internal Revenue Service, to reveal a decade’s worth of Donald Trump’s tax return data showing negative adjusted income. His tax returns had been the subject of great public speculation.
What does this mean for business? Depending on the circumstances, it could mean that simply removing personal information such as names and email addresses is not enough to anonymise data and may leave a business in breach of many privacy laws.
To address these risks, companies like Google, Uber and Apple use “differential privacy” techniques, which add “noise” to datasets so that individuals cannot be re-identified, while still allowing access to the information outcomes they need.
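As a rough sketch of the core idea (not any particular company’s implementation), a differentially private system answers aggregate queries with random noise calibrated to how much one individual could change the answer. The epsilon value and data below are illustrative only.

```python
# Minimal sketch of differential privacy's core mechanism: answer
# aggregate queries with calibrated Laplace noise so that no single
# individual's presence in the dataset can be confidently inferred.
import numpy as np

rng = np.random.default_rng()

def private_count(values: list[bool], epsilon: float = 0.5) -> float:
    """Noisy count of True values. A counting query has sensitivity 1
    (adding or removing one person changes the count by at most 1),
    so Laplace noise with scale 1/epsilon gives epsilon-DP."""
    true_count = sum(values)
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# e.g. "how many users in this (hypothetical) dataset visited site X?"
visited = [True, False, True, True, False]
print(private_count(visited))  # close to 3, but randomised
```

A smaller epsilon means more noise and stronger privacy; the analyst still gets a usable aggregate answer, which is the trade-off the technique is designed to strike.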
It may come as a surprise to many businesses using data anonymisation as a quick and cost-effective way to de-personalise data that more may be needed to protect individuals’ personal information.
As we head toward 2020, expect significant public debate relating to smartphone applications designed to increase turnout and participation in upcoming elections. The Democratic Party has dipped its toe in the water by announcing in July plans to allow telephone voting in lieu of appearing for neighborhood caucus meetings in the key early primary states of Iowa and Nevada.
Given concerns regarding security and reliability of submitting votes over the internet, jurisdictions around the country have begun to test solutions involving blockchain technology to allow absentee voters to submit voting ballots. Following initial pilot programs in Denver and West Virginia, Utah County, Utah will be the next jurisdiction to utilize a blockchain-based mobile voting application in connection with its upcoming municipal primary and general elections.
The pilot program, which will utilize the mobile voting application “Voatz”, will allow active-duty military, their eligible dependents and overseas voters to cast absentee ballots. Eligible voters will need to apply for an absentee ballot with the county clerk and then download the mobile application. The ballot itself will be unlocked using the smartphone’s biometric data (i.e., a fingerprint or facial recognition) and then will be distributed into the blockchain framework for tabulation.
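Voatz’s actual architecture is proprietary, so purely as a conceptual sketch, the property that motivates blockchain-based ballot storage (an append-only record in which tampering with any earlier entry is detectable) can be shown with a minimal hash chain:

```python
# Conceptual sketch only: a minimal hash chain showing the tamper-evident,
# append-only property that motivates blockchain-based ballot storage.
# This is NOT how Voatz works; its actual design is proprietary.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_ballot(chain: list[dict], ballot: dict) -> None:
    chain.append({
        "ballot": ballot,
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    })

def verify(chain: list[dict]) -> bool:
    """Any edit to an earlier ballot breaks every later prev_hash link."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list[dict] = []
append_ballot(chain, {"voter_id": "anon-1", "choice": "A"})
append_ballot(chain, {"voter_id": "anon-2", "choice": "B"})
print(verify(chain))                # True
chain[0]["ballot"]["choice"] = "B"  # tamper with the first ballot
print(verify(chain))                # False
```

Because each entry commits to the hash of the one before it, altering any recorded ballot invalidates every subsequent link, which is the tamper-evidence property election pilots are relying on.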