EDPB on Dark Patterns: Lessons for Marketing Teams

“Dark patterns” are becoming the target of EU data protection authorities, and the new guidelines of the European Data Protection Board (EDPB) on “dark patterns in social media platform interfaces” confirm their focus on such practices. While they are built around examples from social media platforms (real or fictitious), these guidelines contain lessons for all websites and applications. The bad news for marketers: the EDPB doesn’t like it when dry legal texts and interfaces are made catchier or more enticing.

To illustrate, in a section of the guidelines regarding the selection of an account profile photo, the EDPB considers the example of a “help/information” prompt saying “No need to go to the hairdresser’s first. Just pick a photo that says ‘this is me.’” According to the EDPB, such a practice “can impact the final decision made by users who initially decided not to share a picture for their account” and thus makes consent invalid under the General Data Protection Regulation (GDPR). Similarly, the EDPB criticises an extreme example of a cookie banner with a humorous link to a bakery cookies recipe that incidentally says, “we also use cookies”, stating that “users might think they just dismiss a funny message about cookies as a baked snack and not consider the technical meaning of the term ‘cookies.’” The EDPB even suggests that the data minimisation principle, and not security concerns, should ultimately guide an organisation’s choice of which two-factor authentication method to use.

Do these new guidelines reflect privacy paranoia or common sense? The answer should lie somewhere in between, but the whole document (64 pages long) in our view suggests an overly strict approach, one that we hope will move closer to common sense as a result of a newly started public consultation process.

Let us take a closer look at what useful lessons – or warnings – can be drawn from these new guidelines.

What are “dark patterns” and when are they unlawful?

According to the EDPB, dark patterns are “interfaces and user experiences […] that lead users into making unintended, unwilling and potentially harmful decisions regarding the processing of their personal data” (p. 2). They “aim to influence users’ behaviour and can hinder their ability to effectively protect their personal data and make conscious choices.” The risk associated with dark patterns is higher for websites or applications meant for children, as “dark patterns raise additional concerns regarding potential impact on children” (p. 8).

While the EDPB takes a strongly negative view of dark patterns in general, it recognises that dark patterns do not automatically lead to an infringement of the GDPR. The EDPB acknowledges that “[d]ata protection authorities are responsible for sanctioning the use of dark patterns if these breach GDPR requirements” (emphasis ours; p. 2). Nevertheless, the EDPB guidance strongly links the concept of dark patterns with the data protection by design and by default principles of Art. 25 GDPR, suggesting that disregard for those principles could lead to a presumption that the language or a practice in fact creates a “dark pattern” (p. 11).

The EDPB refers here to its Guidelines 4/2019 on Article 25 Data Protection by Design and by Default and in particular to the following key principles:

  • “Autonomy – Data subjects should be granted the highest degree of autonomy possible to determine the use made of their personal data, as well as autonomy over the scope and conditions of that use or processing.
  • Interaction – Data subjects must be able to communicate and exercise their rights in respect of the personal data processed by the controller.
  • Expectation – Processing should correspond with data subjects’ reasonable expectations.
  • Consumer choice – The controllers should not “lock in” their users in an unfair manner. Whenever a service processing personal data is proprietary, it may create a lock-in to the service, which may not be fair, if it impairs the data subjects’ possibility to exercise their right of data portability in accordance with Article 20 GDPR.
  • Power balance – Power balance should be a key objective of the controller-data subject relationship. Power imbalances should be avoided. When this is not possible, they should be recognised and accounted for with suitable countermeasures.
  • No deception – Data processing information and options should be provided in an objective and neutral way, avoiding any deceptive or manipulative language or design.
  • Truthful – the controllers must make available information about how they process personal data, should act as they declare they will and not mislead data subjects.”

Is data minimisation compatible with the use of SMS two-factor authentication?

One of the EDPB’s positions, while grounded in the principle of data minimisation, undercuts a security practice that has grown significantly over the past few years. In effect, the EDPB seems to question the validity under the GDPR of requests for phone numbers for two-factor authentication where e-mail tokens would theoretically be possible:

“30. To observe the principle of data minimisation, [organisations] are required not to ask for additional data such as the phone number, when the data users already provided during the sign-up process are sufficient. For example, to ensure account security, enhanced authentication is possible without the phone number by simply sending a code to users’ email accounts or by several other means.
31. Social network providers should therefore rely on means for security that are easier for users to re-initiate. For example, the [organisation] can send users an authentication number via an additional communication channel, such as a security app, which users previously installed on their mobile phone, but without requiring the users’ mobile phone number. User authentication via email addresses is also less intrusive than via phone number because users could simply create a new email address specifically for the sign-up process and utilise that email address mainly in connection with the Social Network. A phone number, however, is not that easily interchangeable, given that it is highly unlikely that users would buy a new SIM card or conclude a new phone contract only for the reason of authentication.”
(emphasis ours; p. 15)

The EDPB also appears to be highly critical of phone-based verification in the context of registration “because the email address constitutes the regular contact point with users during the registration process” (p. 15).

This position is unfortunate, as it suggests that data minimisation may preclude controllers from even assessing which method of two-factor authentication – in this case, e-mail versus SMS one-time passwords – better suits their requirements, taking into consideration the different security benefits and drawbacks of the two methods. The EDPB’s reasoning could even be used to exclude any form of stronger two-factor authentication, as additional factors inevitably require separate processing (e.g., a phone number or third-party account linking for some app-based authentication methods).

For these reasons, organisations should view this aspect of the new EDPB guidelines with a healthy dose of scepticism. It will likewise be important for interested stakeholders to participate in the consultation and explain the security benefits of using phone numbers to keep the “two” in two-factor authentication.
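
To make the EDPB’s preferred alternative more concrete, below is a minimal sketch of an email-based one-time-code flow that avoids collecting a phone number. It is illustrative only: the function names are hypothetical, the email delivery is stubbed out, and a real deployment would need rate limiting, durable storage and an actual mail service.

```python
import secrets
import hashlib
import time

# In-memory store of pending codes: {email: (sha256_of_code, expiry_timestamp)}.
# A real deployment would use a database or cache with per-user rate limiting.
_pending_codes: dict[str, tuple[str, float]] = {}

CODE_TTL_SECONDS = 300  # codes expire after five minutes


def send_email(recipient: str, body: str) -> None:
    # Hypothetical stub; a real implementation would hand off to an SMTP
    # relay or a transactional email provider.
    print(f"[email to {recipient}] {body}")


def issue_code(email: str) -> None:
    """Generate a 6-digit one-time code and email it to the user."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    digest = hashlib.sha256(code.encode()).hexdigest()
    _pending_codes[email] = (digest, time.time() + CODE_TTL_SECONDS)
    send_email(email, f"Your sign-in code is {code}")


def verify_code(email: str, submitted: str) -> bool:
    """Check a submitted code against the stored hash and expiry (single attempt)."""
    record = _pending_codes.pop(email, None)
    if record is None:
        return False
    digest, expiry = record
    if time.time() > expiry:
        return False
    return hashlib.sha256(submitted.encode()).hexdigest() == digest


if __name__ == "__main__":
    issue_code("user@example.com")
    # In practice the user would read the code from their inbox and type it in.
    print(verify_code("user@example.com", "000000"))  # almost certainly False
```

The point is not that this particular design is mandated, only that a second factor can be delivered over a channel the user has already provided (the email address) rather than over a newly collected phone number – which is precisely the trade-off the consultation should weigh against the security arguments for SMS.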

Consent withdrawal: same number of clicks?

Recent decisions by EU regulators (notably two decisions by the French authority, the CNIL) have led to speculation about whether EU rules effectively require website operators to make it possible for data subjects to withdraw consent to all cookies with one single click, just as most websites make it possible to give consent through a single click. The authorities themselves have not stated that this is unequivocally required, although privacy activists have notably filed complaints against hundreds of websites, many of them for not including a “reject all” button on their cookie banner.

The EDPB now appears to side with the privacy activists in this respect, stating that “consent cannot be considered valid under the GDPR when consent is obtained through only one mouse-click, swipe or keystroke, but the withdrawal takes more steps, is more difficult to achieve or takes more time” (p. 14).

Operationally, however, it seems impossible to comply with a “one-click withdrawal” standard in absolute terms. Just pulling up settings after registration or after the first visit to a website will always require an extra click, purely to open those settings. We expect this issue to be examined by the courts eventually.

Is creative wording indicative of a “dark pattern”?

The EDPB’s guidelines contain several examples of wording that is intended to convince the user to take a specific action.

The photo example mentioned in the introduction above is an illustration, but other (likely fictitious) examples include the following:

  • For sharing geolocation data: “Hey, a lone wolf, are you? But sharing and connecting with others help make the world a better place! Share your geolocation! Let the places and people around you inspire you!” (p.17)
  • To prompt a user to provide a self-description: “Tell us about your amazing self! We can’t wait, so come on right now and let us know!” (p. 17)

The EDPB criticises the language used, stating that it is “emotional steering”:

“[S]uch techniques do not cultivate users’ free will to provide their data, since the prescriptive language used can make users feel obliged to provide a self-description because they have already put time into the registration and wish to complete it. When users are in the process of registering to an account, they are less likely to take time to consider the description they give or even if they would like to give one at all. This is particularly the case when the language used delivers a sense of urgency or sounds like an imperative. If users feel this obligation, even when in reality providing the data is not mandatory, this can have an impact on their “free will”” (pp. 17-18).

Similarly, in a section about account deletion and deactivation, the EDPB criticises interfaces that highlight “only the negative, discouraging consequences of deleting their accounts,” e.g., “you’ll lose everything forever,” or “you won’t be able to reactivate your account” (p. 55). The EDPB even criticises interfaces that preselect deactivation or pause options over delete options, considering that “[t]he default selection of the pause option is likely to nudge users to select it instead of deleting their account as initially intended. Therefore, the practice described in this example can be considered as a breach of Article 12 (2) GDPR since it does not, in this case, facilitate the exercise of the right to erasure, and even tries to nudge users away from exercising it” (p. 56). This, combined with the EDPB’s aversion to confirmation requests (see the discussion of confirmation steps below), suggests that the EDPB is ignoring the risk that a data subject might opt for deletion without fully recognising the consequences, i.e., loss of access to the deleted data.

The EDPB’s approach suggests that any effort to woo users into giving more data or leaving data with the organisation will be viewed as harmful by data protection authorities. Yet data protection rules are there to prevent abuse and protect data subjects, not to render all marketing techniques illegal.

In this context, the guidelines should in our opinion be viewed as an invitation to re-examine marketing techniques to ensure that they are not too pushy – in the sense that users would in effect truly be pushed into a decision regarding personal data that they would not otherwise have made. Marketing techniques are not per se unlawful under the GDPR but may run afoul of GDPR requirements in situations where data subjects are misled or robbed of their choice.

Other key lessons for marketers and user interface designers

  • Avoid continuous prompting: One of the issues regularly highlighted by the EDPB is “continuous prompting”, i.e., prompts that appear again and again during a user’s experience on a platform. The EDPB suggests that this creates fatigue, leading the user to “give in,” i.e., by “accepting to provide more data or to consent to another processing, as they are wearied from having to express a choice each time they use the platform” (p. 14). Examples given by the EDPB include the SMS two-factor authentication popup mentioned above, as well as “import your contacts” functionality. Outside of social media platforms, the main example for most organisations is their cookie policy (so this position by the EDPB reinforces the need to manage cookie banners properly). In addition, newsletter popups and popups about “how to get our new report for free by filling out this form” are frequent on many digital properties. While popups can be effective ways to get more subscribers or more data, the EDPB guidance suggests that regulators will consider such practices questionable from a data protection perspective.
  • Ensure consistency or a justification for confirmation steps: The EDPB highlights the “longer than necessary” dark pattern at several places in its guidelines (in particular pp. 18, 52, & 57), with illustrations of confirmation pop-ups that appear before a user is allowed to select a more privacy-friendly option (and while no such confirmation is requested for more privacy-intrusive options). Such practices are unlawful according to the EDPB. This does not mean that confirmation pop-ups are always unlawful – just that you need to have a good justification for using them where you do.
  • Have a good reason for preselecting less privacy-friendly options: Because the GDPR requires not only data protection by design but also data protection by default, make sure that you are able to justify an interface in which a more privacy-intrusive option is selected by default – or better yet, don’t make any preselection. The EDPB calls preselection of privacy-intrusive options “deceptive snugness” (“Because of the default effect which nudges individuals to keep a pre-selected option, users are unlikely to change these even if given the possibility” p. 19).
  • Make all privacy settings available on all platforms: If a user is asked to make a choice during registration or upon his/her first visit (e.g., for cookies, newsletters, sharing preferences, etc.), ensure that those settings can all be found easily later on, from a central privacy settings page if possible, and alongside all data protection tools (such as tools for exercising a data subject’s right to access his/her data, to modify data, to delete an account, etc.). Also make sure that all such functionality is available not only on a desktop interface but also on mobile devices and across all applications. The EDPB illustrates this point by criticising the case where an organisation has a messaging app that does not include the same privacy statement and data subject request tools as the main app (p. 27).
  • Be careful with general language such as “Your data might be used to improve our services”: It is common in most privacy statements to include a statement that personal data (e.g., customer feedback) “can” or “may be used” to improve an organisation’s products and services. According to the EDPB, the word “services” is likely to be “too general” to be viewed as “clear,” and it is “unclear how data will be processed for the improvement of services.” The use of the conditional tense in the example (“might”) also “leaves users unsure whether their data will be used for the processing or not” (p. 25). The EDPB’s stance in this respect confirms a position taken by EU regulators in previous guidance on transparency and serves as a reminder to tell data subjects specifically how their data will be used.
  • Ensure linguistic consistency: If your website or app is available in more than one language, ensure that all data protection notices and tools are available in those languages as well and that the language choice made on the main interface is automatically taken into account on the data-related pages (pp. 25-26).

Best practices according to the EDPB

Finally, the EDPB highlights some other “best practices” throughout its guidelines. We have combined them below for easier review:

  • Structure and ease of access:
    • Shortcuts: Links to information, actions, or settings that can be of practical help to users in managing their data and data protection settings should be available wherever they are relevant to the information displayed or the user’s current experience (e.g., links redirecting to the relevant parts of the privacy policy; in the case of a data breach communication to users, a link for users to reset their password).
    • Data protection directory: For easy navigation through the different sections of the menu, provide users with an easily accessible page from which all data protection-related actions and information can be reached. This page could be accessible from the organisation’s main navigation menu, the user account, the privacy policy, etc.
    • Privacy Policy Overview: At the start/top of the privacy policy, include a collapsible table of contents with headings and sub-headings that shows the different passages the privacy notice contains. Clearly identified sections allow users to quickly identify and jump to the section they are looking for.
    • Sticky navigation: While consulting a page related to data protection, the table of contents could be constantly displayed on the screen allowing users to quickly navigate to relevant content thanks to anchor links.
  • Transparency:
    • Organisation contact information: The organisation’s contact point for data protection requests should be clearly stated in the privacy policy. It should appear in a section where users can expect to find it, such as a section on the identity of the data controller, a rights-related section, or a contact section.
    • Reaching the supervisory authority: Stating the specific identity of the EU supervisory authority and including a link to its website or the specific website page for lodging a complaint is another EDPB recommendation. This information should be present in a section where users can expect to find it, such as a rights-related section.
    • Change spotting and comparison: When changes are made to the privacy notice, make previous versions accessible with the date of release and highlight any changes.
  • Terminology & explanations:
    • Coherent wording: Across the website, the same wording and definitions should be used for the same data protection concepts. The wording used in the privacy policy should match that used on the rest of the platform.
    • Providing definitions: When using unfamiliar or technical words or jargon, providing a definition in plain language will help users understand the information provided to them. The definition can be given directly in the text when users hover over the word and/or be made available in a glossary.
    • Explaining consequences: When users want to activate or deactivate a data protection control, or give or withdraw their consent, inform them in a neutral way of the consequences of such action.
    • Use of examples: In addition to providing mandatory information that clearly and precisely states the purpose of processing, offering specific data processing examples can make the processing more tangible for users.
  • Contrasting Data Protection Elements: Making data protection-related elements or actions visually striking in an interface that is not directly dedicated to the matter helps readability. For example, when posting a public message on the platform, controls for geolocation should be directly available and clearly visible.
  • Data Protection Onboarding: Just after the creation of an account, include data protection points within the onboarding experience for users to discover and set their preferences seamlessly. This can be done by, for example, inviting them to set their data protection preferences after adding their first friend or sharing their first post.
  • Notifications (including data breach notifications): Notifications can be used to raise users’ awareness of aspects, changes, or risks related to personal data processing (e.g., when a data breach occurs). These notifications can be implemented in several ways, such as through inbox messages, pop-in windows, fixed banners at the top of the webpage, etc.

Next steps and international perspectives

These guidelines (available online) are subject to public consultation until 2 May 2022, so it is possible they will be modified as a result of the consultation and, we hope, improved to reflect a more pragmatic view of data protection that balances data subjects’ rights, security, and operational business needs. If you wish to contribute to the public consultation, note that the EDPB publishes feedback it receives (as a result, we have occasionally submitted feedback on behalf of clients wishing to remain anonymous).

Irrespective of the outcome of the public consultation, the guidelines are guaranteed to have an influence on the approach of EU data protection authorities in their investigations. From this perspective, it is better to be forewarned – and to have legal arguments at your disposal if you wish to adopt an approach that deviates from the EDPB’s position.

Moreover, these guidelines come at a time when the United States Federal Trade Commission (FTC) is also concerned with dark patterns. The FTC recently published an enforcement policy statement on the matter in October 2021. Dark patterns are also being discussed at the Organisation for Economic Cooperation and Development (OECD). International dialogue can be helpful if conversations about desired policy also consider practical solutions that can be implemented by businesses and reflect a desirable user experience for data subjects.

Organisations should consider evaluating their own techniques to encourage users to go one way or another and document the justification for their approach.

© 2022 Keller and Heckman LLP

Texas AG Sues Meta Over Collection and Use of Biometric Data

On February 14, 2022, Texas Attorney General Ken Paxton brought suit against Meta, the parent company of Facebook and Instagram, over the company’s collection and use of biometric data. The suit alleges that Meta collected and used Texans’ facial geometry data in violation of the Texas Capture or Use of Biometric Identifier Act (“CUBI”) and the Texas Deceptive Trade Practices Act (“DTPA”). The lawsuit is significant because it represents the first time the Texas Attorney General’s Office has brought suit under CUBI.

The suit focuses on Meta’s “tag suggestions” feature, which the company has since retired. The feature scanned faces in users’ photos and videos to suggest “tagging” (i.e., identifying by name) users who appeared in the photos and videos. In the complaint, Attorney General Ken Paxton alleged that Meta collected and analyzed individuals’ facial geometry data (which constitutes biometric data under CUBI) without their consent, shared the data with third parties, and failed to destroy the data in a timely manner, all in violation of CUBI and the DTPA. CUBI regulates the collection and use of biometric data for commercial purposes, and the DTPA prohibits false, misleading, or deceptive acts or practices in the conduct of any trade or commerce.

Among other forms of relief, the complaint seeks an injunction enjoining Meta from violating these laws, a $25,000 civil penalty for each violation of CUBI, and a $10,000 civil penalty for each violation of the DTPA. The suit follows Facebook’s $650 million class-action settlement over alleged violations of Illinois’ Biometric Privacy Act and the company’s discontinuance of the tag suggestions feature last year.

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

BREAKING: Seventh Circuit Certifies BIPA Accrual Question to Illinois Supreme Court in White Castle

Yesterday the Seventh Circuit issued a much-awaited ruling in the Cothron v. White Castle litigation, punting to the Illinois Supreme Court on the pivotal question of when a claim under the Illinois Biometric Privacy Act (“BIPA”) accrues.  No. 20-3202 (7th Cir.).  Read on to learn more about the ruling and what it may mean for other biometric and data privacy litigations.

First, a brief recap of the facts of the dispute.  After Plaintiff started working at a White Castle in Illinois in 2004, White Castle began using an optional, consent-based finger-scan system for employees to sign documents and access their paystubs and computers.  Plaintiff consented in 2007 to the collection of her biometric data and then 11 years later—in 2018—filed suit against White Castle for purported violation of BIPA.

Plaintiff alleged that White Castle did not obtain consent to collect or disclose her fingerprints at the first instance the collection occurred under BIPA because BIPA did not exist in 2007.  Plaintiff asserted that she was “required” to scan her finger each time she accessed her work computer and weekly paystubs with White Castle and that her prior consent to the collection of biometric data did not satisfy BIPA’s requirements.  According to Plaintiff, White Castle violated BIPA Sections 15(b) and 15(d) by collecting, then “systematically and automatically” disclosing her biometric information without adhering to BIPA’s requirements (she claimed she did not consent under BIPA to the collection of her information until 2018). She sought statutory damages for “each” violation on behalf of herself and a putative class.

Before the district court, White Castle had moved to dismiss the Complaint and for judgment on the pleadings; both motions were denied.  The district court sided with Plaintiff, holding that “[o]n the facts set forth in the pleadings, White Castle violated Section 15(b) when it first scanned [Plaintiff’s] fingerprint and violated Section 15(d) when it first disclosed her biometric information to a third party.”  The district court also held that under Section 20 of BIPA, Plaintiff could recover for “each violation.”  The court rejected White Castle’s argument that this was an absurd interpretation of the statute not in keeping with legislative intent, commenting that “[i]f the Illinois legislature agrees that this reading of BIPA is absurd, it is of course free to modify the statute” but “it is not the role of a court—particularly a federal court—to rewrite a state statute to avoid a construction that may penalize violations severely.”

White Castle filed an appeal of the district court’s ruling with the Seventh Circuit.  As presented by White Castle, the issue before the Seventh Circuit was “[w]hether, when conduct that allegedly violates BIPA is repeated, that conduct gives rise to a single claim under Sections 15(b) and 15(d) of BIPA, or multiple claims.”

In ruling yesterday that this issue was appropriate for the Illinois Supreme Court, the Seventh Circuit held that “[w]hether a claim accrues only once or repeatedly is an important and recurring question of Illinois law implicating state accrual principles as applied to this novel state statute.  It requires authoritative guidance that only the state’s highest court can provide.”  Here, the accrual issue is dispositive for purposes of Plaintiff’s BIPA claim.  As the Seventh Circuit recognized, “[t]he timeliness of the suit depends on whether a claim under the Act accrued each time [Plaintiff] scanned her fingerprint to access a work computer or just the first time.”

Interestingly, the Seventh Circuit drew a comparison to data privacy litigations outside the context of BIPA, stating that the parties’ “disagreement, framed differently, is whether the Act should be treated like a junk-fax statute for which a claim accrues for each unsolicited fax, [], or instead like certain privacy and reputational torts that accrue only at the initial publication of defamatory material.”

Several BIPA litigations have been stayed pending a ruling from the Seventh Circuit in White Castle and these cases will remain on pause going into 2022 pending a ruling from the Illinois Supreme Court.  While some had hoped for clarity on this area of BIPA jurisprudence by the end of the year, the Seventh Circuit’s ruling means that this litigation will remain a must-watch privacy case going forward.

Article By Kristin L. Bryan of Squire Patton Boggs (US) LLP


© Copyright 2021 Squire Patton Boggs (US) LLP

Patch Up – Log4j and How to Avoid a Cybercrime Christmas

A vulnerability so dangerous that Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly called it “one of the most serious [she’s] seen in [her] entire career, if not the most serious” arrived just in time for the holidays. On December 10, 2021, CISA and the director of cybersecurity at the National Security Agency (NSA) began alerting the public of a critical vulnerability within the Apache Log4j Java logging framework. Civilian government agencies have been instructed to mitigate against the vulnerability by Christmas Eve, and companies should follow suit.

The Log4j vulnerability allows threat actors to remotely execute code both on-premises and within cloud-based application servers, thereby obtaining control of the impacted servers. CISA expects the vulnerability to affect hundreds of millions of devices. This is a widespread critical vulnerability and companies should quickly assess whether, and to what extent, they or their service providers are using Log4j.

Immediate Recommendations

  • Immediately upgrade all versions of Apache Log4j to 2.15.0 (a rough sketch for locating Log4j JAR files in your own environment appears after this list).
  • Ask your service providers whether their products or environment use Log4j, and if so, whether they have patched to the latest version. Helpfully, CISA sponsors a community-sourced GitHub repository with a list of software related to the vulnerability as a reference guide.
  • Confirm your security operations are monitoring internet-facing systems for indicators of compromise.
  • Review your incident response plan and ensure all response team information is up to date.
  • If your company is involved in an acquisition, discuss the security steps taken within the target company to address the Log4j vulnerability.
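
Following up on the upgrade recommendation above, here is a minimal sketch that walks a directory tree and flags log4j-core JAR files whose filename indicates a version below 2.15.0. It is a filename-based heuristic only – it will miss shaded, renamed or embedded copies of the library – so treat it as a supplement to vendor confirmation and dedicated scanning tools, not a replacement.

```python
import re
import sys
from pathlib import Path

# Matches e.g. "log4j-core-2.14.1.jar" and captures the version string.
LOG4J_JAR = re.compile(r"log4j-core-(\d+(?:\.\d+)*)\.jar$", re.IGNORECASE)
FIXED_VERSION = (2, 15, 0)  # minimum patched version per the December 2021 advisory


def parse_version(text: str) -> tuple:
    """Turn '2.14.1' into (2, 14, 1) so versions compare correctly."""
    return tuple(int(part) for part in text.split("."))


def scan(root: Path) -> None:
    """Report every log4j-core JAR under `root` and whether it needs an upgrade."""
    for jar in root.rglob("*.jar"):
        match = LOG4J_JAR.search(jar.name)
        if not match:
            continue
        version = parse_version(match.group(1))
        status = "OK" if version >= FIXED_VERSION else "VULNERABLE - upgrade"
        print(f"{status}: {jar} (version {match.group(1)})")


if __name__ == "__main__":
    # Scan the directory passed on the command line, or the current directory.
    scan(Path(sys.argv[1] if len(sys.argv) > 1 else "."))
```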

The versatility of this vulnerability has already attracted the attention of malicious nation-state actors. For example, government-affiliated cybercriminals in Iran and China have a “wish list” (no holiday pun intended) of entities that they are aggressively targeting with the Log4j vulnerability. Due to this malicious nation-state activity, if your company experiences a ransomware attack related to the Log4j vulnerability, it is particularly important to pay attention to potential sanctions-related issues.

Companies with additional questions about the Log4j vulnerability and its potential impact on technical threats and potential regulatory scrutiny or commercial liability are encouraged to contact counsel.

© 2021 Bracewell LLP

Privacy Tip #309 – Women Poised to Fill Gap of Cybersecurity Talent

I have been advocating for gender equality in cybersecurity for years [related podcast and post].

The statistics on the participation of women in the field of cybersecurity continue to be bleak, despite significant outreach efforts, including “Girls Who Code” and programs to encourage girls to explore STEM (Science, Technology, Engineering and Mathematics) subjects.

Women are just now rising to positions from which they can help other women break into the field, land high-paying jobs, and combat the dearth of talent in technology. Judy Dinn, the new Chief Information Officer of TD Bank NA, is doing just that. One of her priorities is to encourage women to pursue tech careers. She recently told the Wall Street Journal that she “really, really always wants to make sure that female representation—whether they’re in grade school, high school, universities—that that funnel is always full.”

The Wall Street Journal article states that a study by AnitaB.org found that “women made up about 29% of the U.S. tech workforce in 2020.”  It is well known that companies are fighting for tech and cybersecurity talent and that there are many more open positions than talent to fill them. The tech and cybersecurity fields are growing with unlimited possibilities.

This is where women should step in. With increased support, and prioritized recruiting efforts that encourage women to enter fields focused on technology, we can tap more talent and begin to fill the gap of cybersecurity talent in the U.S.

Article By Linn F. Freedman of Robinson & Cole LLP


Copyright © 2021 Robinson & Cole LLP. All rights reserved.

Reflections on 2019 in Technology Law, and a Peek into 2020

It is that time of year when we look back to see what tech-law issues took up most of our time this year and look ahead to see what the emerging issues are for 2020.

Data: The Issues of the Year

Data presented a wide variety of challenging legal issues in 2019. Data is solidly entrenched as a key asset in our economy, and as a result, the issues around it demanded a significant level of attention.

  • Clearly, privacy and data security-related data issues were dominant in 2019. The GDPR, CCPA and other privacy regulations garnered much consideration and resources, and with GDPR enforcement ongoing and CCPA enforcement right around the corner, the coming year will be an important one to watch. As data generation and collection technologies continued to evolve, privacy issues evolved as well.  In 2019, we saw many novel issues involving mobile, biometric and connected car data. Facial recognition technology generated a fair amount of litigation, and presented concerns regarding the possibility of intrusive governmental surveillance (prompting some municipalities, such as San Francisco, to ban its use by government agencies).

  • Because data has proven to be so valuable, innovators continue to develop new and sometimes controversial technological approaches to collecting data. The legal issues abound.  For example, in the past year, we have been advising on the implications of an ongoing dispute between the City Attorney of Los Angeles and an app operator over geolocation data collection, as well as a settlement between the FTC and a personal email management service over access to “e-receipt” data.  We have entertained multiple questions from clients about the unsettled legal terrain surrounding web scraping and have been closely following developments in this area, including the blockbuster hiQ Ninth Circuit ruling from earlier this year. As usual, the pace of technological innovation has outpaced the law’s ability to keep up.

  • Data security is now regularly a boardroom and courtroom issue, with data breaches, phishing, ransomware attacks and identity theft (and cyberinsurance) the norm. Meanwhile, consumers are experiencing deeper and deeper “breach fatigue” with every breach notice they receive. While the U.S. government has not yet been able to put into place general national data security legislation, states and certain regulators are acting to compel data collectors to take reasonable measures to protect consumer information (e.g., New York’s newly-enacted SHIELD Act) and IoT device manufacturers to equip connected devices with certain security features appropriate to the nature and function of the devices (e.g., California’s IoT security law, which becomes effective January 1, 2020). Class actions over data breaches and security lapses are filed regularly, with mixed results.

  • Many organizations have focused on the opportunistic issues associated with new and emerging sources of data. They seek to use “big data” – either sourced externally or generated internally – to advance their operations.  They are focused on understanding the sources of the data and their lawful rights to use such data.  They are examining new revenue opportunities offered by the data, including the expansion of existing lines, the identification of customer trends or the creation of new businesses (including licensing anonymized data to others).

  • Moreover, data was a key asset in many corporate transactions in 2019. Across the board in M&A, private equity, capital markets, finance and some real estate transactions, data was the subject of key deal points, sometimes intensive diligence, and often difficult negotiations. Consumer data has even become a national security issue, as the Committee on Foreign Investment in the United States (CFIUS), expanded under a 2018 law, began to scrutinize more and more technology deals involving foreign investment, including those involving sensitive personal data.

I am not going out on a limb in saying that 2020 and beyond promise many interesting developments in “big data,” privacy and data security.

Social Media under Fire

Social media platforms experienced an interesting year. The power of the medium came into even clearer focus, and not necessarily in the most flattering light. In addition to privacy issues, fake news, hate speech, bullying, political interference, revenge porn, defamation and other problems came to light. Executives of the major platforms have been on the hot seat in Washington, and there is clearly bipartisan unease with the influence of social media in our society.  Many believe that the status quo cannot continue. Social media platforms are working to build self-regulatory systems to address these thorny issues, but the work continues.  Still, amidst the bluster and criticism, it remains to be seen whether the calls to “break up” the big tech companies will come to pass or whether Congress’s ongoing debate of comprehensive data privacy reform will lead to legislation that would alter the basic practices of the major technology platforms (and in turn, much of the data collection and sharing done by today’s businesses).  We have been working with clients, advising them of their rights and obligations as platforms, as contributors to platforms, and in a number of other ways in which they may have a connection to such platforms or the content or advertising appearing on such platforms.

What does 2020 hold? Will Washington’s withering criticism of the tech world translate into any tangible legislation or regulatory efforts?  Will Section 230 of the Communications Decency Act – the law that underpins user generated content on social media and generally the availability of user generated content on the internet and apps – be curtailed? Will platforms be asked to accept more responsibility for third party content appearing on their services?

While these issues are playing out in the context of the largest social media platforms, any legislative solutions to these problems could in fact extend to others that do not have the same level of compliance resources available. Unless a legislative solution includes some type of “size of person” test or room to adapt technical safeguards to the nature and scope of a business’s activities or sensitivity of the personal information collected, smaller providers could be shouldered with a difficult and potentially expensive compliance burden. Thus, it remains to be seen how the focus on social media and any attempt to solve the issues it presents may affect online communications more generally.

Quantum Leaps

Following the momentum of the passage of the National Quantum Initiative at the close of 2018, a significant level of resources has been invested into quantum computing in 2019.  This bubble of activity culminated in Google announcing a major milestone in quantum computing.  Interestingly, IBM suggests that it wasn’t quite as significant as Google claimed.  In any case, the development of quantum computing in the U.S. has progressed a great deal in 2019, and many organizations will continue to focus on it as a priority in 2020.

  • Reports state that China has dedicated billions to build a Chinese national laboratory for quantum computing, among other related R&D projects, a development that has gotten the attention of Congress and the Pentagon. This may be the beginning of the 21st century’s great technological race.

  • What is at stake? The implications are huge. It is expected that ultimately, quantum computers will be able to solve complex computations exponentially faster – as much as 100 million times faster – than classical computers. The opportunities this could present are staggering.  As are the risks and dangers.  For example, for all its benefits, the same technology could quickly crack the digital security that protects online banking and shopping and secure online communications.

  • Many organizations are concerned about the advent of quantum computing. But given that it will be a reality in the future, what should you be thinking about now? While not a real threat for 2020 or the near-term thereafter, it would be wise to think about it if one is anticipating investing in long-term infrastructure solutions. Will quantum computing render the investment obsolete? Or, will quantum computing present a security threat to that infrastructure?  It is not too early to think about these issues, and for example, technologists have been hard at work developing quantum-proof blockchain protocols. It would at least be prudent to understand the long-term roadmap of technology suppliers to see whether they have thought about quantum computing, and if so, how they see quantum computing impacting their solutions and services.

Artificial Intelligence

We have seen a significant level of deployment in the Artificial Intelligence/Machine Learning landscape this past year.  According to the Artificial Intelligence Index Report 2019, AI adoption by organizations (of at least one function or business unit) is increasing globally. Many businesses across many industries are deploying some level of AI into their businesses.  However, the same report notes that many companies employing AI solutions might not be taking steps to mitigate the risks from AI, beyond cybersecurity. We have advised clients on those risks, and in certain cases have been able to apportion exposure amongst multiple parties involved in the implementation.  In addition, we have also seen the beginning of regulation in AI, such as California’s chatbot law, New York’s recent passage of a law (S.2302) prohibiting consumer reporting agencies and lenders from using the credit scores of people in a consumer’s social network to determine that individual’s creditworthiness, and the efforts of a number of regulators to regulate the use of AI in hiring decisions.

We expect 2020 to be a year of increased adoption of AI, coupled with an increasing sense of apprehension about the technology. There is a growing concern that AI and related technologies will continue to be “weaponized” in the coming year, as the public and the government express concern over “deepfakes” (including the use of voice deepfakes of CEOs to commit fraud).  And, of course, the warnings of people like Elon Musk and Bill Gates, as they discuss AI, cannot be ignored.

Blockchain

We have been very busy in 2019 helping clients learn about blockchain technologies, including issues related to smart contracts and cryptocurrency. 2019 was largely characterized by pilots, trials, tests and other limited applications of blockchain in enterprise and infrastructure applications, as well as a significant level of activity in tokenization of assets, cryptocurrency investments, and the building of businesses related to the trading and custody of digital assets. Our blog, www.blockchainandthelaw.io, keeps readers abreast of key new developments and we hope our readers have found our published articles on blockchain and smart contracts helpful.

Looking ahead to 2020, regulators such as the SEC, FinCEN, IRS and CFTC are still watching the cryptocurrency space closely. Gone are the days of ill-fated “initial coin offerings” and today, security token offerings, made in compliance with the securities laws, are increasingly common. Regulators are beginning to be more receptive to cryptocurrency, as exemplified by the New York State Department of Financial Services’ revisiting of the oft-maligned “BitLicense” requirement in New York.

Beyond virtual currency, I believe some of the most exciting developments of blockchain solutions in 2020 will be in supply chain management and other infrastructure uses of blockchain. 2019 was characterized by experimentation and trial. We have seen many successes and some slower starts. In 2020, we expect to see an increase in adoption. Of course, the challenge for businesses is to really understand whether blockchain is an appropriate solution for the particular need. Contrary to some of the hype out there, blockchain is not the right fit for every technology need, and there are many circumstances where a traditional client-server model is the preferred approach. For help in evaluating whether blockchain is in fact a potential fit for a technology need, this article may be helpful.

Other 2020 Developments

Interestingly, one of the companies that has served as a form of leading indicator in the adoption of emerging technologies is Walmart.  Walmart was one of the first major companies to embrace supply chain uses of blockchain, so what is Walmart looking at for 2020? A recent Wall Street Journal article discusses its interest and investment in 5G communications and edge computing. We too have been assisting clients in those areas, and expect them to be areas of significant activity in 2020.

Edge computing, which is related to “fog” computing and, in turn, to cloud computing, is, simply put, the idea of storing and processing information at the point of capture, rather than communicating that information to the cloud or a central data processing location for storage and processing. According to the WSJ article, Walmart plans on building edge computing capability for other businesses to hire (following to some degree Amazon’s model for AWS).  The article also talks about Walmart’s interest in 5G technology, which would work hand-in-hand with its edge computing network.
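
As a purely illustrative sketch (hypothetical names, stubbed-out cloud upload), the snippet below shows the basic idea: readings are aggregated at the point of capture and only a small summary is sent upstream, instead of streaming every raw reading to a central location.

```python
import random
import statistics


def read_sensor() -> float:
    # Stand-in for a real device reading (e.g., a temperature probe).
    return 20.0 + random.gauss(0, 1.5)


def upload_to_cloud(summary: dict) -> None:
    # Hypothetical stub; a real system would POST this to a cloud endpoint.
    print(f"uploading summary: {summary}")


def edge_loop(window_size: int = 60) -> None:
    """Aggregate readings at the point of capture and send only the summary."""
    window = [read_sensor() for _ in range(window_size)]
    summary = {
        "count": len(window),
        "mean": round(statistics.mean(window), 2),
        "max": round(max(window), 2),
        "min": round(min(window), 2),
    }
    upload_to_cloud(summary)  # 60 raw readings reduced to one small record


if __name__ == "__main__":
    edge_loop()
```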

Our experience with clients suggests that Walmart may be onto something.  Edge and fog computing, 5G and the growth of the “Internet of Things” are converging and will offer the ability for businesses to be faster, cheaper and more profitable. Of course this convergence will also tie back to the issues we discussed earlier, such as data, privacy and data security, artificial intelligence and machine learning. In general, this convergence will further increase the technical ability to process and use data (which would conceivably require regulation that would feature privacy and data security protections that are consumer-friendly, yet balanced so they do not stifle the economic and technological benefits of 5G).

This past year has presented a host of fascinating technology-based legal issues, and 2020 promises to hold more of the same.  We will continue to keep you posted!

We hope you had a good 2019, and we want to wish all of our readers a very happy and safe holiday season and a great New Year!


© 2019 Proskauer Rose LLP.


California DMV Exposes 3,200 SSNs of Drivers

The California Department of Motor Vehicles (DMV) announced on November 5, 2019, that it allowed the Social Security numbers (SSNs) of 3,200 California drivers to be accessed by unauthorized individuals in other state and federal agencies, including the Internal Revenue Service, the Small Business Administration and the district attorneys’ offices in Santa Clara and San Diego counties.

According to a news report, the access included the full Social Security numbers of individuals who were being investigated for criminal activity or compliance with tax laws. Apparently, the access also allowed investigators to see which drivers didn’t have Social Security numbers, which has given immigration advocates concern.

The DMV stated that the incident was not a hack, but rather, an error, and the unauthorized access was terminated when it was discovered on August 2, 2019. Nonetheless, the DMV notified the 3,200 drivers of the incident and the exposure of their personal information. The DMV issued a statement that it has “taken additional steps to correct this error, protect this information and reaffirm our serious commitment to protect the privacy rights of all license holders.”

 

Copyright © 2019 Robinson & Cole LLP. All rights reserved.

Ubers of the Future will Monitor Your Vital Signs

Uber has announced that it is considering developing self-driving cars that monitor passengers’ vital signs by asking the passengers how they feel during the ride, in order to provide a stress-free and satisfying trip. This concept was outlined in a patent filed by the company in July 2019. Uber envisions passengers connecting their own health-monitoring devices (e.g., smart watches, activity trackers, heart monitors, etc.) to the vehicle to measure the passenger’s reactions. The vehicle would then synthesize the information, along with other measurements that are taken by the car itself (e.g., thermometers, vehicle speed sensors, driving logs, infrared cameras, microphones, etc.). This type of biometric monitoring could potentially allow the vehicle to assess whether it might be going too fast, getting too close to another vehicle on the road, or applying the brakes too hard.  The goal is to use artificial intelligence to create a more ‘satisfying’ experience for the riders in the autonomous vehicle.

This proposed technology presents yet another way that ride-sharing companies such as Uber can collect more data from their passengers. Of course, passengers would have the choice about whether to use this feature, but this is another consideration for passengers in this data-driven industry.


Copyright © 2019 Robinson & Cole LLP. All rights reserved.


Be Thankful I Don’t Take It All – France Moves to Tax the Value of Data

Were the Beatles still recording today, they might have to add a new verse to Taxman. In what will surely be the opening salvo in government efforts to find ways to recapture the value of the personal data upon which so much of our digital economy now seems to depend and return it to consumers, France is now set to become the first European country to implement what is effectively a “data tax”.

About 30 companies, mostly from the US, may soon have to pay a 3% tax on their revenues. The tax will mostly affect companies that use customer data to sell online advertising. Justifying the new tax, French Finance Minister Bruno Le Maire clearly drew the battle lines:

This is about justice . . . . These digital giants use our personal data, make huge profits out of these data . . . then transfer the money somewhere else without paying their fair share of taxes.

The bill would apply to digital companies with worldwide revenues over 750 million euros ($848 million), including French revenue over 25 million euros. Not surprisingly, Google, Amazon and Facebook are squarely in the crosshairs of the new tax.

According to European Commission figures, the FANG companies and their ilk pay on average 14 percentage points less tax than other European companies. France took unilateral action after a similar proposal at the EU level failed to get unanimous support from member states, although Le Maire said he would now push for an international deal by the end of the year.

Lest you think this is just a European phenomenon, you need only look west to California, where Governor Newsom has commissioned the study of “data dividends” to help address the digital divide. In fact, the much-discussed California Consumer Privacy Act already contains provisions encouraging digital companies to compensate consumers for the use of their personal data. See our recent alert on data dividends and the CCPA here.

There will be lots more action in the “value for data” space in coming days. While academics debate whether data is more like labor or more like capital, we expect state and federal regulators to look to the value of data as a means to address the challenges of artificial intelligence and income inequality.

 

Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

Were Analytics the Real MVP of the Super Bowl?

As the Eagles readied to celebrate the franchise’s first Vince Lombardi trophy, an unlikely candidate basked in the glow of being declared the game’s Most Valuable Player. Surely it was Nick Foles who, on his way to upsetting one of the NFL’s elite franchises, threw and caught a touchdown in the same big game, was the true MVP. But was he?

In the days leading up to the Super Bowl, the New York Times published an article about how the Eagles leveraged analytics to secure a Super Bowl berth. The team relied, in part, on probabilistic models that leveraged years of play data to calculate likely outcomes, given a specific set of circumstances. They found that while enumerating outcomes and optimizing for success, the models would, in many cases, recommend plays that bucked the common wisdom. Indeed, we saw the Eagles run plays and make decisions throughout the season that, to the outside observer, may have seemed mind-boggling, overly-aggressive, or risky. Of course, the outside observer did not have access to the play-by-play analytics. Yet, in many instances, these data-driven decisions produced favorable results. So it seems that analytics were the real MVP, right? Well, not entirely.

As we have written in the past, the most effective analytics platforms provide guidance and should never be solely relied upon by employers when making decisions. This analytics concept rings as true in football as it does in business. The New York Times article talks about how mathematical models can serve to defend a playmaking decision that defies traditional football logic. For example, why would any team go for it on fourth and one, deep in their own zone, during their first possession in overtime? What if the analytics suggested going for it was more likely to result in success? If it fails, well, the football pundits will have a lot to talk about.
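
To see why a model might defend the unconventional call, consider a toy expected-points comparison. Every number below is a made-up assumption for illustration, not actual team data:

```python
# Toy fourth-and-one decision: compare expected points of going for it vs. punting.
# All probabilities and point values are illustrative assumptions, not real figures.

P_CONVERT = 0.70          # assumed chance of converting fourth-and-one
EP_IF_CONVERT = 2.5       # assumed expected points if the drive continues
EP_IF_FAIL = -1.5         # assumed expected points after a turnover on downs
EP_IF_PUNT = 0.3          # assumed expected points after punting the ball away

ev_go = P_CONVERT * EP_IF_CONVERT + (1 - P_CONVERT) * EP_IF_FAIL
ev_punt = EP_IF_PUNT

print(f"Expected points if we go for it: {ev_go:.2f}")
print(f"Expected points if we punt:      {ev_punt:.2f}")
print("Model recommends:", "go for it" if ev_go > ev_punt else "punt")
```

With these assumed inputs, going for it carries a higher expected value than punting – the kind of output that lets a coach defend a decision that defies traditional football logic, even though the inputs themselves are only estimates.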

Coaches and players weigh the analytics, examine the play conditions, and gauge on-field personnel’s ability to perform. In other words, the team uses analytics as a guide and, taking into account other “soft” variables and experience, makes a decision that is right for the team at that time. This same strategy leads to success in the business world. Modern companies hold a wealth of data that can be used to inform decisions with cutting edge analytics, but data-driven insights must be balanced with current business conditions in order to contribute to success. If this balancing act works on the grand stage of professional football, it can work for your organization.

Indeed, we may soon see a day when football stars raise the Super Bowl MVP trophy locked arm-in-arm with their data science team. Until then, congratulations, Mr. Foles.

 

Jackson Lewis P.C. © 2018
This post was written by Eric J. Felsberg of Jackson Lewis P.C.