Fitness App Agrees to Pay $56 Million to Settle Class Action Alleging Dark Pattern Practices

On February 14, 2022, Noom Inc., a popular weight loss and fitness app, agreed to pay $56 million and provide an additional $6 million in subscription credits to settle a putative class action in New York federal court. The class is seeking conditional certification and has urged the court to preliminarily approve the settlement.

The suit was filed in May 2020, when a group of Noom users alleged that Noom “actively misrepresents and/or fails to accurately disclose the true characteristics of its trial period, its automatic enrollment policy, and the actual steps customers need to follow in attempting to cancel a 14-day trial and avoid automatic enrollment.” More specifically, users alleged that Noom engaged in an unlawful auto-renewal subscription business model by luring customers in with the opportunity to “try” its programs, then imposing significant barriers to cancellation (e.g., only allowing customers to cancel their subscriptions through their virtual coach), resulting in customers paying a nonrefundable advance lump-sum payment for up to eight (8) months at a time. Under the proposed settlement, Noom must substantially enhance its auto-renewal disclosures, require customers to take a separate action (e.g., a check box or digital signature) to accept auto-renewal, and provide a button on the customer’s account page for easier cancellation.

Regulators at the federal and state level have recently made clear their focus on enforcement actions against “dark patterns.” We previously summarized the FTC’s enforcement policy statement from October 2021 warning companies against using dark patterns that trick consumers into subscription services. More recently, several state attorneys general (e.g., in Indiana, Texas, the District of Columbia, and Washington State) made announcements regarding their commitment to ramp up enforcement work on “dark patterns” that are used to ascertain consumers’ location data.

Article By: Privacy and Cybersecurity Practice Group at Hunton Andrews Kurth

Copyright © 2022, Hunton Andrews Kurth LLP. All Rights Reserved.

New Poll Underscores Growing Support for National Data Privacy Legislation

Over half of all Americans would support a federal data privacy law, according to a recent poll from Politico and Morning Consult. The poll found that 56 percent of registered voters would either strongly or somewhat support a proposal to “make it illegal for social media companies to use personal data to recommend content via algorithms.” Democrats were most likely to support the proposal at 62 percent, compared to 54 percent of Republicans and 50 percent of Independents. Still, the numbers may show that bipartisan action is possible.

The poll is indicative of Americans’ increasing data privacy awareness and concerns. Colorado, Virginia, and California all passed or updated data privacy laws within the last year, and nearly every state is considering similar legislation. Additionally, Congress held several high-profile hearings last year soliciting testimony from several tech industry leaders and whistleblower Frances Haugen. In the private sector, Meta CEO Mark Zuckerberg has come out in favor of a national data privacy standard similar to the EU’s General Data Protection Regulation (GDPR).

Politico and Morning Consult released the poll results days after Senator Ron Wyden (D-OR) accepted a 24,000-signature petition calling for Congress to pass a federal data protection law. Senator Wyden, who recently introduced his own data privacy proposal called the “Mind Your Own Business Act,” said it was “past time” for Congress to act.

He may be right: U.S./EU data flows have been on borrowed time since 2020. The GDPR prohibits data flows from the EU to countries with inadequate data protection laws, including the United States. The EU-U.S. Privacy Shield framework allowed transatlantic data flows to continue notwithstanding the rule, but an EU court invalidated the agreement in 2020, and data flows between the U.S. and the EU have been in legal limbo ever since. Eventually, Congress and the EU will need to address the situation, and a federal data protection law would be a long-term solution.

This post was authored by C. Blair Robinson, legal intern at Robinson+Cole. Blair is not yet admitted to practice law. Click here to read more about the Data Privacy and Cybersecurity practice at Robinson & Cole LLP.

For more data privacy and cybersecurity news, click here to visit the National Law Review.

Copyright © 2022 Robinson & Cole LLP. All rights reserved.

Patch Up – Log4j and How to Avoid a Cybercrime Christmas

A vulnerability so dangerous that Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly called it “one of the most serious [she’s] seen in [her] entire career, if not the most serious” arrived just in time for the holidays. On December 10, 2021, CISA and the director of cybersecurity at the National Security Agency (NSA) began alerting the public of a critical vulnerability within the Apache Log4j Java logging framework. Civilian government agencies have been instructed to mitigate against the vulnerability by Christmas Eve, and companies should follow suit.

The Log4j vulnerability allows threat actors to remotely execute code on both on-premises and cloud-based application servers, thereby obtaining control of the impacted servers. CISA expects the vulnerability to affect hundreds of millions of devices. This is a widespread, critical vulnerability, and companies should quickly assess whether, and to what extent, they or their service providers are using Log4j (a minimal scanning sketch follows the recommendations below).

Immediate Recommendations

  • Immediately upgrade all versions of Apache Log4j to 2.15.0.
  • Ask your service providers whether their products or environment use Log4j, and if so, whether they have patched to the latest version. Helpfully, CISA sponsors a community-sourced GitHub repository with a list of software related to the vulnerability as a reference guide.
  • Confirm your security operations are monitoring internet-facing systems for indicators of compromise.
  • Review your incident response plan and ensure all response team information is up to date.
  • If your company is involved in an acquisition, discuss the security steps taken within the target company to address the Log4j vulnerability.
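
For companies starting that assessment, a rough first pass is simply to search hosts for log4j-core JAR files and compare their versions against the patched release. The following Python sketch illustrates the idea; the scan root and filename pattern are assumptions, it will not find copies bundled inside other (“fat”) JARs, and it is no substitute for vendor confirmation or a full software inventory:

```python
#!/usr/bin/env python3
"""Illustrative triage script: find log4j-core JARs and flag versions
below 2.15.0 (the first release patching CVE-2021-44228, per the alert).
A sketch under stated assumptions, not a complete inventory tool."""
import re
import sys
from pathlib import Path

FIXED = (2, 15, 0)
# Matches plainly named jars such as log4j-core-2.14.1.jar; beta builds
# and copies repackaged inside "fat" jars are not detected by this sketch.
JAR_RE = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def scan(root: Path) -> None:
    for jar in root.rglob("log4j-core-*.jar"):
        match = JAR_RE.search(jar.name)
        if not match:
            continue  # unusual version string; review manually
        version = tuple(int(part) for part in match.groups())
        status = "OK" if version >= FIXED else "VULNERABLE - upgrade"
        print(f"{jar}  {'.'.join(map(str, version))}  [{status}]")

if __name__ == "__main__":
    # Default to the current directory; pass a deployment root to widen the scan.
    scan(Path(sys.argv[1]) if len(sys.argv) > 1 else Path("."))
```

Saved as, say, log4j_scan.py, a run such as `python3 log4j_scan.py /opt` would cover a typical application directory; results only narrow the search, and service-provider confirmation (per the checklist above) remains essential.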

The versatility of this vulnerability has already attracted the attention of malicious nation-state actors. For example, government-affiliated cybercriminals in Iran and China have a “wish list” (no holiday pun intended) of entities that they are aggressively targeting with the Log4j vulnerability. Due to this malicious nation-state activity, if your company experiences a ransomware attack related to the Log4j vulnerability, it is particularly important to pay attention to potential sanctions-related issues.

Companies with additional questions about the Log4j vulnerability and its potential impact on technical threats and potential regulatory scrutiny or commercial liability are encouraged to contact counsel.

© 2021 Bracewell LLP

Continuing Effort to Protect National Security Data and Networks

CMMC 2.0 – Simplification and Flexibility of DoD Cybersecurity Requirements

Evolving and increasing threats to U.S. defense data and national security networks have necessitated changes and refinements to the U.S. regulatory requirements intended to protect them.

In 2016, the U.S. Department of Defense (DoD) issued a Defense Federal Acquisition Regulation Supplement (DFARS) intended to better protect defense data and networks. In 2017, DoD began issuing a series of memoranda to further enhance protection of defense data and networks via the Cybersecurity Maturity Model Certification (CMMC). In December 2019, the Department of State, Directorate of Defense Trade Controls (DDTC) issued long-awaited guidance governing, in part, the minimum encryption requirements for storage, transport and/or transmission of controlled unclassified information (CUI) and technical defense information (TDI) otherwise restricted by ITAR.

DFARS initiated the government’s efforts to protect national security data and networks by implementing specific NIST cyber requirements for all DoD contractors with access to CUI, TDI or a DoD network. DFARS compliance was self-certified in nature.

CMMC provided a broad framework to enhance cybersecurity protection for the Defense Industrial Base (DIB). CMMC proposed a verification program to ensure that NIST-compliant cybersecurity protections were in place to protect CUI and TDI residing on DoD and DoD contractors’ networks. Unlike DFARS, CMMC initially required certification of compliance by an independent cybersecurity expert.

The DoD has announced an updated cybersecurity framework, referred to as CMMC 2.0. The announcement comes after a months-long internal review of the proposed CMMC framework. It still could take nine to 24 months for the final rule to take shape. But for now, CMMC 2.0 promises to be simpler to understand and easier to comply with.

Three Goals of CMMC 2.0

Broadly, CMMC 2.0 is similar to the earlier-proposed framework. Familiar elements include a tiered model, required assessments, and contractual implementation. But the new framework is intended to facilitate three goals identified by DoD’s internal review.

  • Simplify the CMMC standard and provide additional clarity on cybersecurity regulations, policy, and contracting requirements.
  • Focus on the most advanced cybersecurity standards and third-party assessment requirements for companies supporting the highest priority programs.
  • Increase DoD oversight of professional and ethical standards in the assessment ecosystem.

Key Changes under CMMC 2.0

The most impactful changes of CMMC 2.0 are:

  • A reduction from five to three security levels.
  • Reduced requirements for third-party certifications.
  • Allowances for plans of actions and milestones (POA&Ms).

CMMC 2.0 has only three levels of cybersecurity

An innovative feature of CMMC 1.0 had been the five-tiered model that tailored a contractor’s cybersecurity requirements according to the type and sensitivity of the information it would handle. CMMC 2.0 keeps this model, but eliminates the two “transitional” levels in order to reduce the total number of security levels to three. This change also makes it easier to predict which level will apply to a given contractor. At this time, it appears that:

  • Level 1 (Foundational) will apply to federal contract information (FCI) and will be similar to the old first level;
  • Level 2 (Advanced) will apply to controlled unclassified information (CUI) and will mirror NIST SP 800-171 (similar to, but simpler than, the old third level); and
  • Level 3 (Expert) will apply to more sensitive CUI and will be partly based on NIST SP 800-172 (possibly similar to the old fifth level).

Significantly, CMMC 2.0 focuses on cybersecurity practices, eliminating the few so-called “maturity processes” that had baffled many DoD contractors.

CMMC 2.0 relaxes many certification requirements

Another feature of CMMC 1.0 had been the requirement that all DoD contractors undergo third-party assessment and certification. CMMC 2.0 is much less ambitious and allows Level 1 contractors — and even a subset of Level 2 contractors — to conduct only an annual self-assessment. It is worth noting that a subset of Level 2 contractors — those having “critical national security information” — will still be required to seek triennial third-party certification.

CMMC 2.0 reinstitutes POA&Ms

An initial objective of CMMC 1.0 had been that — by October 2025 — contractual requirements would be fully implemented by DoD contractors. There was no option for partial compliance. CMMC 2.0 reinstitutes a regime that will be familiar to many, by allowing for submission of Plans of Actions and Milestones (POA&Ms). The DoD still intends to specify a baseline number of non-negotiable requirements. But a remaining subset will be addressable by a POA&M with clearly defined timelines. The announced framework even contemplates waivers “to exclude CMMC requirements from acquisitions for select mission-critical requirements.”

Operational takeaways for the defense industrial base

For many DoD contractors, CMMC 2.0 will not significantly impact their required cybersecurity practices — for FCI, focus on basic cyber hygiene; and for CUI, focus on NIST SP 800-171. But the new CMMC 2.0 framework dramatically reduces the number of DoD contractors that will need third-party assessments. It could also allow contractors to delay full compliance through the use of POA&Ms beyond 2025.

Increased Risk of Enforcement

Regardless of the proposed simplicity and flexibility of CMMC 2.0, DoD contractors need to remain vigilant to meet their respective CMMC 2.0 level cybersecurity obligations.

Immediately preceding the CMMC 2.0 announcement, the U.S. Department of Justice (DOJ) announced a new Civil Cyber-Fraud Initiative on October 6 to combat emerging cyber threats to the security of sensitive information and critical systems. In its announcement, the DOJ advised that it would pursue government contractors who fail to follow required cybersecurity standards.

As Bradley has previously reported in more detail, the DOJ plans to utilize the False Claims Act to pursue cybersecurity-related fraud by government contractors, or involving government programs, where entities or individuals put U.S. information or systems at risk by knowingly:

  • Providing deficient cybersecurity products or services;
  • Misrepresenting their cybersecurity practices or protocols; or
  • Violating obligations to monitor and report cybersecurity incidents and breaches.

The DOJ also expressed its intent to work closely on the initiative with other federal agencies, subject matter experts and its law enforcement partners throughout the government.

As a result, while CMMC 2.0 will provide some simplicity and flexibility in implementation and operations, U.S. government contractors need to be mindful of their cybersecurity obligations to avoid new heightened enforcement risks.

© 2021 Bradley Arant Boult Cummings LLP

For more articles about cybersecurity, visit the NLR Cybersecurity, Media & FCC section.

Are Tech Workers Considering Unionizing In The Wake Of COVID-19?

Big tech companies by and large have remained union-free over the years, unlike their peers in other industries such as retail and manufacturing. However, earlier this year – and before the COVID-19 pandemic upended workplaces across America – unions scored their first major organizing victory in the tech sector when employees at Kickstarter voted to form a union. According to at least one recent report, more tech company workers may soon be following suit.

The Teamsters, Communications Workers of America, and the Office and Professional Employees International Union all reported an uptick in inquiries from non-union employees about the prospects of unionizing the companies they work for, including in the tech and gig economy sectors. One of the reasons cited by these workers was a feeling that not enough is being done to protect employees against the spread of COVID-19, particularly those who work in e-commerce fulfillment centers or drive for ride-sharing apps. There also was concern among employees who were, at least at one point, denied remote work arrangements when they believed their jobs were suited to such an arrangement.

It remains to be seen whether organized labor will be able to augment its numbers based on these workers’ concerns. Several things may complicate any such efforts, including unprecedented layoffs and an almost singular focus by people across the nation on the ongoing pandemic itself.

To the extent unions try to capitalize on the unrest, there are many reasons employers facing organizing attempts should be concerned. For example, one of the most effective tools a company can use to stave off a unionization attempt is the large, all-employee meeting, where leaders of the organization communicate directly to the workforce why forming a union isn’t in the company’s or employees’ best interests. In an era where social distancing is a necessity, such meetings – at least in person – likely won’t be a viable option. In addition, mail-in ballot union elections, which are less preferred than live secret-ballot voting booths, may become the standard as long as social distancing requirements remain in effect.

Accordingly, employers desiring to remain union-free should give thought to what talking points, materials, and strategies – as well as communications channels – they have available to them now around this issue. Waiting to do so until after a union petition hits may place them at a significant disadvantage.


© 2020 BARNES & THORNBURG LLP

For more industries impacted by COVID-19, see the National Law Review Coronavirus News section.

Union Launches National Organizing Effort in Gaming and Tech Industries

The Communications Workers of America (CWA) has begun a nationwide union-organizing campaign targeting game and tech industry employees, in partnership with Game Workers Unite! (GWU), a so-called “grass-roots” worker group founded in Southern California in 2018 to spur unionization in the gaming industry. As here, such groups typically are founded and funded by established labor organizations.

The idea for the organizing effort is the result of discussions between the CWA and GWU over the past months. In addition, CWA Canada is partnering with the GWU chapter in Toronto. The CWA has used similar partnerships with other activist groups, most recently teaming up with the Committee for Better Banks to attempt to organize banking sector employees.

Organizing is being spearheaded by Emma Kinema, a co-founder of GWU, and Wes McEnany, a former organizer with the Service Employees International Union and leader of the “Fight for 15” effort. Kinema will lead the organizing on the West Coast, while McEnany will focus on the East Coast. Organizers from CWA locals across the country will populate the teams. According to Kinema, the issues on which the union will focus are: “crunch,” or long hours for weeks or months to meet launch deadlines; cyclical layoffs; harassment; misogyny; gender-based pay discrimination; values and ethical issues, such as working with Immigration and Customs Enforcement (ICE); climate change; AI ethics; and pay, severance, and benefits. According to Tom Smith, CWA’s lead organizer, “For a lot of folks, that’s what led them to do this work in the first place, and people are feeling a disconnect between their personal values and what they’re seeing every day in the working lives.”

With the moniker CODE – Campaign to Organize Digital Employees – the ambitious initiative seeks to organize employees across the industry, typically at individual shops or employers. According to Kinema, “We believe workers are strongest when they’re together in one shop in one union, so the disciplines can’t be pitted against each other – none of that’s good for the workers. I think in games and tech, the wall-to-wall industrial model is the best fit.” Smith said the CWA would be open to craft-based organizing – where the focus is industry-wide bargaining units composed of employees performing similar work at different employers – if that is what employees want. In an industry where workers frequently move from employer to employer, portable benefits can be attractive.

An annual survey by the International Game Developers Association, an industry group, found that gaming worker interest in unions had increased to 47 percent by 2019. Indeed, a representation petition is pending at the Brooklyn office of the National Labor Relations Board on behalf of the employees at a gaming company. About 220,000 employees work in the two-billion-dollar gaming industry.

The union has established a website – www.code-cwa.org – as well as a presence on other social media platforms such as Facebook and Twitter.

As most union organizing is based on the presence of unresolved employee issues in the workplace, a comprehensive analysis of such matters may be valuable to employers. Also, supervisors and managers often interact frequently with employees when organizing is afoot or underway; training regarding their rights and responsibilities under the labor laws often is essential.


Jackson Lewis P.C. © 2020

For more on unionizing news, see the National Law Review Labor & Employment law page.

Offered Free Cyber Services? You May Not Need to Look That Gift Horse in the Mouth Any Longer.

Cyberattacks continue to plague health care entities. In an effort to promote improved cybersecurity and prevent those attacks, HHS has proposed new rules under Stark and the Anti-Kickback Statute (“AKS”) to protect in-kind donations of cybersecurity technology and related services from hospitals to physician groups. There is already an EHR exception[1] which protects certain donations of software, information technology and training associated with (and closely related to) an EHR, and HHS is now clarifying that this existing exception has always been available to protect certain cybersecurity software and services. However, the new proposed rule explicitly addresses cybersecurity and is designed to be more permissive than the existing EHR protection.

The proposed exception under Stark and safe harbor under AKS are substantially similar and, unless noted, the following analysis applies to both. The proposed rules allow for the donation of cybersecurity technology such as malware prevention and encryption software. The donation of hardware is not currently contemplated, but HHS is soliciting comment on this matter, as discussed below. The proposed rules also allow for the donation of cybersecurity services that are necessary to implement and maintain cybersecurity of the recipient’s systems. Such services could include:

  • Services associated with developing, installing, and updating cybersecurity software;

  • Cybersecurity training, including breach response, troubleshooting and general “help desk” services;

  • Business continuity and data recovery services;

  • “Cybersecurity as a service” models that rely on a third-party service provider to manage, monitor, or operate cybersecurity of a recipient;

  • Services associated with performing a cybersecurity risk assessment or analysis, vulnerability analysis, or penetration test; or

  • Services associated with sharing information about known cyber threats, and assisting recipients responding to threats or attacks on their systems.

The intent of these rules is to allow the donation of such cybersecurity technology and services in order to encourage their proliferation throughout the health care community, especially among providers who may not be able to afford to undertake such efforts on their own. Accordingly, these rules are expressly intended to be less restrictive than the previous EHR exception and safe harbor. The proposed restrictions are as follows[2]:

  • The donation must be necessary to implement, maintain, or reestablish cybersecurity;

  • The donor cannot condition the donations on the making of referrals by the recipient, and the making of referrals by the recipient cannot be conditioned on receiving a donation; and

  • The donation arrangement must be documented in writing.

AKS has an additional requirement that the donor must not shift the costs of any technology or services to a Federal health care program. Currently, there are no “deeming provisions” within these proposed rules for the purpose of meeting the necessity requirement, but HHS is considering, and seeking comment on, whether to add deeming provisions that would essentially designate certain arrangements as acceptable. Some in the industry appreciate the safety of knowing what is expressly considered acceptable, while others find this approach more restrictive, fearing that the list will come to be treated as exhaustive.

HHS is also considering adding a restriction regarding what types of entities are eligible for the donation. Previously for other rules, HHS has distinguished between entities with direct and primary patient care relationships, such as hospitals and physician practices, and suppliers of ancillary services, such as laboratories and device manufacturers.

Additionally, HHS is soliciting comment on whether to allow the donation of cybersecurity hardware to entities for which a risk assessment identifies a risk to the donor’s cybersecurity. Under this potential rule, the recipient must also have a risk assessment stating that the hardware would reasonably address a threat.


[1] AKS Safe Harbor 42 CFR §1001.952(y); Stark Exception §411.357(bb)
[2] AKS Safe Harbor 42 CFR §1001.952(jj); Stark Exception §411.357(w)(4)


©2020 von Briesen & Roper, s.c

More on cybersecurity software donation regulation on the National Law Review Communications, Media & Internet law page.

Reflections on 2019 in Technology Law, and a Peek into 2020

It is that time of year when we look back to see what tech-law issues took up most of our time this year and look ahead to see what the emerging issues are for 2020.

Data: The Issues of the Year

Data presented a wide variety of challenging legal issues in 2019. Data is solidly entrenched as a key asset in our economy, and as a result, the issues around it demanded a significant level of attention.

  • Clearly, privacy- and data security-related issues were dominant in 2019. The GDPR, CCPA and other privacy regulations garnered much consideration and resources, and with GDPR enforcement ongoing and CCPA enforcement right around the corner, the coming year will be an important one to watch. As data generation and collection technologies continued to evolve, privacy issues evolved as well. In 2019, we saw many novel issues involving mobile, biometric and connected car data. Facial recognition technology generated a fair amount of litigation, and presented concerns regarding the possibility of intrusive governmental surveillance (prompting some municipalities, such as San Francisco, to ban its use by government agencies).

  • Because data has proven to be so valuable, innovators continue to develop new and sometimes controversial technological approaches to collecting data, and the legal issues abound. For example, in the past year, we have been advising on the implications of an ongoing dispute between the City Attorney of Los Angeles and an app operator over geolocation data collection, as well as a settlement between the FTC and a personal email management service over access to “e-receipt” data. We have entertained multiple questions from clients about the unsettled legal terrain surrounding web scraping and have been closely following developments in this area, including the blockbuster hiQ Ninth Circuit ruling from earlier this year. As usual, the pace of technological innovation has outpaced the law’s ability to keep up.

  • Data security is now regularly a boardroom and courtroom issue, with data breaches, phishing, ransomware attacks and identity theft (and cyberinsurance) the norm. Meanwhile, consumers are experiencing deeper and deeper “breach fatigue” with every breach notice they receive. While the U.S. government has not yet been able to put into place general national data security legislation, states and certain regulators are acting to compel data collectors to take reasonable measures to protect consumer information (e.g., New York’s newly-enacted SHIELD Act) and IoT device manufacturers to equip connected devices with security features appropriate to the nature and function of the devices (e.g., California’s IoT security law, which becomes effective January 1, 2020). Class actions over data breaches and security lapses are filed regularly, with mixed results.

  • Many organizations have focused on the opportunistic issues associated with new and emerging sources of data. They seek to use “big data” – either sourced externally or generated internally – to advance their operations.  They are focused on understanding the sources of the data and their lawful rights to use such data.  They are examining new revenue opportunities offered by the data, including the expansion of existing lines, the identification of customer trends or the creation of new businesses (including licensing anonymized data to others).

  • Moreover, data was a key asset in many corporate transactions in 2019. Across the board in M&A, private equity, capital markets, finance and some real estate transactions, data was the subject of key deal points, sometimes intensive diligence, and often difficult negotiations. Consumer data has even become a national security issue, as the Committee on Foreign Investment in the United States (CFIUS), expanded under a 2018 law, began to scrutinize more and more technology deals involving foreign investment, including those involving sensitive personal data.

I am not going out on a limb in saying that 2020 and beyond promise many interesting developments in “big data,” privacy and data security.

Social Media under Fire

Social media platforms experienced an interesting year. The power of the medium came into even clearer focus, and not necessarily in the most flattering light. In addition to privacy issues, fake news, hate speech, bullying, political interference, revenge porn, defamation and other problems came to light. Executives of the major platforms have been on the hot seat in Washington, and there is clearly bipartisan unease with the influence of social media in our society. Many believe that the status quo cannot continue. Social media platforms are working to build self-regulatory systems to address these thorny issues, but the work continues. Still, amidst the bluster and criticism, it remains to be seen whether the calls to “break up” the big tech companies will come to pass, or whether Congress’s ongoing debate over comprehensive data privacy reform will lead to legislation that would alter the basic practices of the major technology platforms (and, in turn, much of the data collection and sharing done by today’s businesses). We have been working with clients, advising them of their rights and obligations as platforms, as contributors to platforms, and in a number of other ways in which they may have a connection to such platforms or to the content or advertising appearing on them.

What does 2020 hold? Will Washington’s withering criticism of the tech world translate into any tangible legislation or regulatory efforts?  Will Section 230 of the Communications Decency Act – the law that underpins user generated content on social media and generally the availability of user generated content on the internet and apps – be curtailed? Will platforms be asked to accept more responsibility for third party content appearing on their services?

While these issues are playing out in the context of the largest social media platforms, any legislative solutions to these problems could in fact extend to others that do not have the same level of compliance resources available. Unless a legislative solution includes some type of “size of person” test or room to adapt technical safeguards to the nature and scope of a business’s activities or the sensitivity of the personal information collected, smaller providers could be saddled with a difficult and potentially expensive compliance burden. Thus, it remains to be seen how the focus on social media, and any attempt to solve the issues it presents, may affect online communications more generally.

Quantum Leaps

Following the momentum of the passage of the National Quantum Initiative Act at the close of 2018, a significant level of resources was invested in quantum computing in 2019. This bubble of activity culminated in Google announcing a major milestone in quantum computing. Interestingly, IBM suggests that it wasn’t quite as significant as Google claimed. In any case, the development of quantum computing in the U.S. progressed a great deal in 2019, and many organizations will continue to focus on it as a priority in 2020.

  • Reports state that China has dedicated billions to building a Chinese national laboratory for quantum computing, among other related R&D projects, a development that has gotten the attention of Congress and the Pentagon. This may be the beginning of the 21st century’s great technological race.

  • What is at stake? The implications are huge. It is expected that, ultimately, quantum computers will be able to solve complex computations exponentially faster – as much as 100 million times faster – than classical computers. The opportunities this could present are staggering, as are the risks and dangers. For example, for all its benefits, the same technology could quickly crack the digital security that protects online banking and shopping and secure online communications.

  • Many organizations are concerned about the advent of quantum computing. But given that it will be a reality in the future, what should you be thinking about now? While not a real threat for 2020 or the near term thereafter, it would be wise to consider quantum computing if one is anticipating investing in long-term infrastructure solutions. Will quantum computing render the investment obsolete? Or will quantum computing present a security threat to that infrastructure? It is not too early to think about these issues; for example, technologists have been hard at work developing quantum-proof blockchain protocols. It would at least be prudent to understand the long-term roadmaps of technology suppliers to see whether they have thought about quantum computing and, if so, how they see it impacting their solutions and services.

Artificial Intelligence

We have seen a significant level of deployment in the Artificial Intelligence/Machine Learning landscape this past year. According to the Artificial Intelligence Index Report 2019, AI adoption by organizations (in at least one function or business unit) is increasing globally. Many businesses across many industries are deploying some level of AI in their operations. However, the same report notes that many companies employing AI solutions might not be taking steps to mitigate the risks from AI beyond cybersecurity. We have advised clients on those risks, and in certain cases have been able to apportion exposure among the multiple parties involved in the implementation. In addition, we have seen the beginning of AI regulation, such as California’s chatbot law, New York’s recent passage of a law (S.2302) prohibiting consumer reporting agencies and lenders from using the credit scores of people in a consumer’s social network to determine that individual’s creditworthiness, and the efforts of a number of regulators to regulate the use of AI in hiring decisions.

We expect 2020 to be a year of increased adoption of AI, coupled with an increasing sense of apprehension about the technology. There is a growing concern that AI and related technologies will continue to be “weaponized” in the coming year, as the public and the government express concern over “deepfakes” (including the use of voice deepfakes of CEOs to commit fraud).  And, of course, the warnings of people like Elon Musk and Bill Gates, as they discuss AI, cannot be ignored.

Blockchain

We have been very busy in 2019 helping clients learn about blockchain technologies, including issues related to smart contracts and cryptocurrency. 2019 was largely characterized by pilots, trials, tests and other limited applications of blockchain in enterprise and infrastructure settings, as well as a significant level of activity in tokenization of assets, cryptocurrency investments, and the building of businesses related to the trading and custody of digital assets. Our blog, www.blockchainandthelaw.io, keeps readers abreast of key new developments, and we hope our readers have found our published articles on blockchain and smart contracts helpful.

Looking ahead to 2020, regulators such as the SEC, FinCEN, IRS and CFTC are still watching the cryptocurrency space closely. Gone are the days of ill-fated “initial coin offerings”; today, security token offerings, made in compliance with the securities laws, are increasingly common. Regulators are beginning to be more receptive to cryptocurrency, as exemplified by the New York State Department of Financial Services’ revisiting of the oft-maligned “BitLicense” requirement in New York.

Beyond virtual currency, I believe some of the most exciting developments of blockchain solutions in 2020 will be in supply chain management and other infrastructure uses of blockchain. 2019 was characterized by experimentation and trial. We have seen many successes and some slower starts. In 2020, we expect to see an increase in adoption. Of course, the challenge for businesses is to really understand whether blockchain is an appropriate solution for the particular need. Contrary to some of the hype out there, blockchain is not the right fit for every technology need, and there are many circumstances where a traditional client-server model is the preferred approach. For help in evaluating whether blockchain is in fact a potential fit for a technology need, this article may be helpful.

Other 2020 Developments

Interestingly, one of the companies that has served as a leading indicator in the adoption of emerging technologies is Walmart. Walmart was one of the first major companies to embrace supply chain uses of blockchain, so what is Walmart looking at for 2020? A recent Wall Street Journal article discusses its interest and investment in 5G communications and edge computing. We too have been assisting clients in those areas, and expect them to be active areas in 2020.

Edge computing, which is related to “fog” computing and, in turn, to cloud computing, is, simply put, the idea of storing and processing information at the point of capture, rather than communicating that information to the cloud or a central data processing location for storage and processing. According to the WSJ article, Walmart plans on building edge computing capability for other businesses to hire (following to some degree Amazon’s model for AWS). The article also discusses Walmart’s interest in 5G technology, which would work hand in hand with its edge computing network.

Our experience with clients suggests that Walmart may be onto something. Edge and fog computing, 5G and the growth of the “Internet of Things” are converging and will offer businesses the ability to be faster, cheaper and more profitable. Of course, this convergence also ties back to the issues we discussed earlier, such as data, privacy and data security, and artificial intelligence and machine learning. In general, this convergence will further increase the technical ability to process and use data (which would conceivably require regulation featuring privacy and data security protections that are consumer-friendly, yet balanced so they do not stifle the economic and technological benefits of 5G).

This past year has presented a host of fascinating technology-based legal issues, and 2020 promises to hold more of the same.  We will continue to keep you posted!

We hope you had a good 2019, and we want to wish all of our readers a very happy and safe holiday season and a great New Year!


© 2019 Proskauer Rose LLP.

For more in technology developments, see the National Law Review Intellectual Property or Communications, Media & Internet law sections.

AI and Evidence: Let’s Start to Worry

When researchers at the University of Washington pulled together a clip of a faked speech by President Obama, using video segments of the President’s earlier speeches run through artificial intelligence, we watched with a queasy feeling. The combination wasn’t perfect – we could still see some seams and stitches showing – but it was good enough to paint a vision of the future. Soon we would not be able to trust our own eyes and ears.

Now the researchers at the University of Washington (who clearly seem intent on ruining our society) have developed the next level of AI visual wizardry – fake people good enough to fool real people. As reported recently in Wired Magazine, the professors embarked on a Turing beauty contest, generating thousands of virtual faces that look like people alive today, but aren’t.

Using some of the same tech that makes deepfake videos, the Husky professors ran a game for their research subjects called Which Face is Real? In it, subjects were shown a real face and a faked face and asked to choose which was real. “On average, players could identify the reals nearly 60 percent of the time on their first try. The bad news: Even with practice, their performance peaked at around 75 percent accuracy.” Wired observes that the tech will only get better at fooling people “and so will chatbot software that can put false words into fake mouths.”

We should be concerned. As with all digital technologies (and maybe most tech of all types, if you look at it a certain way), the first industrial applications we have seen occur in the sex industry. The sex industry has lax rules (if they exist at all), and the basest instincts of humanity find enough participants to make a new tech financially viable. As reported by the BBC, “96% of these videos are of female celebrities having their likenesses swapped into sexually explicit videos – without their knowledge or consent.”

Of course, given the level of mendacity that populism drags in its fetid wake, we should expect to see examples of deepfakes offered on television news soon as additional support of the “alternate facts” ginned up by politicians, or generated to smear an otherwise blameless accuser of (faked) horrible behavior.  It is hard to believe that certain corners of the press would be able to resist showing the AI created video.

But, as lawyers, we have an equally valid concern about how this phenomenon plays in court. Clearly, we have rules to authenticate evidence. New Evidence Rule 902(13) allows authentication of records “generated by an electronic process or system that produces an accurate result” if “shown by the certification of a qualified person” in a particular way. But with the testimony of someone who was wrong, fooled or simply lying about the provenance of an AI-generated video, a false digital file can easily be introduced as evidence.

Some courts, under the “silent witness” theory, have allowed a video to speak for itself. Either way, courts will need to tighten up authentication rules in the coming days of cheap and easy deepfakes being present everywhere. As every litigator knows, no matter what a judge tells a jury, once a video is seen and heard, its effects can dominate a juror’s mind.

I imagine that a new field of video veracity expertise will arise, as one side tries to prove its opponent’s evidence is a deepfake and the opponent works to establish its evidence as “straight video.” One of the problems in this space is not just that deepfakes will slip their way into court, damning the innocent and exonerating the guilty, but that the mere existence of deepfakes allows unscrupulous (or zealously protective) lawyers to cast doubt on real, honest, naturally created video. A significant part of that new field of video veracity experts will be employed to cast shade on real evidence – “We know that deepfakes are easy to make, and this is clearly one of them.” While real, direct video that goes to the heart of a matter is often conclusive in establishing a crime, it can be successfully challenged, even when its message is true. Ask John DeLorean.

So I now place a call to the legal technology community: as the software to make deepfakes continues to improve, please help us develop parallel technology to identify them. Lawyers and litigants need to be able to authenticate genuine video evidence and to strike deepfaked video as such. I am certain that somewhere in Langley, Fort Meade, Tel Aviv, Moscow and/or Shanghai both of these technologies have already been mastered and are being used, but we in the non-intelligence world may not know about them for a decade. We need some civilian/commercial help in wrangling the truth out of this increasingly complex and frightening technology.


Copyright © 2019 Womble Bond Dickinson (US) LLP All Rights Reserved.

For more artificial intelligence, see the National Law Review Communications, Media & Internet law page.

CMS’s Request for Information Provides Additional Signal That AI Will Revolutionize Healthcare

On October 22, 2019, the Centers for Medicare and Medicaid Services (“CMS”) issued a Request for Information (“RFI”) to obtain input on how CMS can utilize Artificial Intelligence (“AI”) and other new technologies to improve its operations. CMS’s objectives in leveraging AI chiefly include identifying and preventing fraud, waste, and abuse. The RFI specifically states CMS’s aim “to ensure proper claims payment, reduce provider burden, and overall, conduct program integrity activities in a more efficient manner.” The RFI follows last month’s White House Summit on Artificial Intelligence in Government, where over 175 government leaders and industry experts gathered to discuss how the Federal government can adopt AI “to achieve its mission and improve services to the American people.”

Advances in AI technologies have made the possibility of automated fraud detection at exponentially greater speed and scale a reality. A 2018 study by consulting firm McKinsey & Company estimated that machine learning could help US health insurance companies reduce fraud, waste, and abuse by $20-30 billion.  Indeed, in 2018 alone, improper payments accounted for roughly $31 billion of Medicare’s net costs. CMS is now looking to AI to prevent improper payments, rather than the current “pay and chase” approach to detection.

CMS currently relies on its records system to detect fraud, and humans remain the predominant detectors of fraud in the CMS system. This has resulted in inefficient detection capabilities, and these traditional fraud detection approaches have been decreasingly successful in light of the changing health care landscape. The problem is particularly prevalent as CMS transitions to value-based payment arrangements. In a recent blog post, CMS Administrator Seema Verma revealed that reliance on humans to detect fraud resulted in reviews of less than one percent of medical records associated with items and services billed to Medicare. This lack of scale and speed arguably allows many improper payments to go undetected.

Fortunately, AI manufacturers and developers have been leveraging AI to detect fraud for some time in various industries. For example, the financial and insurance industries already leverage AI to detect fraudulent patterns. However, leveraging AI technology involves more than simply obtaining the technology. Before AI can be used for fraud detection, the time-consuming process of amassing large quantities of high quality, interoperable data must occur. Further, AI algorithms need to be optimized through iterative human quality reviews. Finally, testing the accuracy of the trained AI is crucial before it can be relied upon in a production system.
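
The workflow just described maps onto a familiar machine-learning pattern. As a minimal sketch only – the synthetic data, feature choices, and model below are all hypothetical, and nothing here reflects CMS’s actual systems – training on human-labeled claims and measuring accuracy on held-out claims before production use might look like this:

```python
"""Minimal sketch of the fraud-detection workflow the article describes:
train on human-labeled historical claims, then test accuracy on held-out
claims before any reliance in production. All data here is synthetic and
the feature/model choices are hypothetical."""
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for curated, interoperable claims data: numeric features per claim
# (imagine billed amount, units, provider history) plus a human-reviewed label.
X = rng.normal(size=(5000, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=5000) > 1.5).astype(int)

# Hold out claims the model never sees, mirroring the accuracy testing the
# article says must precede reliance on AI in a production system.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# In the workflow described above, flagged claims would be routed to human
# reviewers, whose corrections become labels for the next training iteration.
```

The held-out evaluation step is the piece most relevant to CMS’s stated concern: measured accuracy on data the model has never seen is what would need to be monitored and maintained over time.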

In the RFI, CMS poses to AI vendors, healthcare providers and suppliers many questions that likely would be addressed by regulation. Before the Federal government relies on AI to detect fraud, CMS must gain assurance that AI technologies will not return inaccurate or incorrect outputs that could negatively impact providers and patients. One key question raised involves how to assess the effectiveness of AI technology and how to measure and maintain its accuracy. The answer to this question should factor heavily into the risk calculation of CMS using AI in its fraud detection activities. Interestingly, companies seeking to automate revenue cycle management processes using AI must grapple with the same concerns. Without adequate compliance mechanisms in place around the development, implementation and use of AI tools for these purposes, companies could face a high risk of legal liability under the Federal False Claims Act or similar fraud and abuse laws and regulations.

In addition to fraud detection, the RFI seeks advice as to whether new technology could help CMS identify “potentially problematic affiliations” in terms of business ownership and registration. Similarly, CMS is interested in feedback on whether AI and machine learning could speed up the current expensive and time-consuming Medicare claim review processes and Medicare Advantage audits.

It is likely that this RFI is one of many signals that AI will revolutionize how healthcare is covered and paid for going forward. We encourage you to weigh in on this ongoing debate to help shape this new world.

Comments are due to CMS by November 20, 2019.


©2019 Epstein Becker & Green, P.C. All rights reserved.

For more CMS activities, see the National Law Review Health Law & Managed Care page.