FCC Adopts Updated Data Breach Notification Rules

On December 13, 2023, the Federal Communications Commission (FCC) voted to update its 16-year-old data breach notification rules (the “Rules”). Under the updated Rules, providers of telecommunications, Voice over Internet Protocol (VoIP) and telecommunications relay services (TRS) are now required to notify the FCC of a data breach, in addition to their existing obligations to notify affected customers, the FBI and the U.S. Secret Service.

The updated Rules introduce a new customer notification timing requirement, requiring notice of a data breach to affected customers without unreasonable delay after notification to the FCC and law enforcement agencies, and in no case more than 30 days after the reasonable determination of a breach. The new Rules also expand the definition of “breach” to include “inadvertent access, use, or disclosure of customer information, except in those cases where such information is acquired in good faith by an employee or agent of a carrier or TRS provider, and such information is not used improperly or further disclosed.” The updated Rules further introduce a harm threshold, whereby customer notification is not required if a carrier or TRS provider can “reasonably determine that no harm to customers is reasonably likely to occur as a result of the breach,” or where the breach solely involves encrypted data and the encryption key was not affected.
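As a rough illustration of the new outer time limit (a sketch for orientation only, not legal advice; the dates and function name are hypothetical), the 30-day customer-notification deadline runs from the reasonable determination that a breach occurred:

```python
from datetime import date, timedelta

# Illustrative only: the updated Rules require customer notice without
# unreasonable delay after notifying the FCC and law enforcement, and in
# no case more than 30 days after the reasonable determination of a breach.
NOTICE_WINDOW = timedelta(days=30)

def customer_notice_deadline(determination_date: date) -> date:
    """Latest permissible customer-notification date under the 30-day outer limit."""
    return determination_date + NOTICE_WINDOW

print(customer_notice_deadline(date(2024, 1, 15)))  # 2024-02-14
```

Note that the 30 days are an outer bound; the Rules still require notice “without unreasonable delay,” which in many cases will be sooner.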

The FCC Approves an NOI to Dive Deeper into AI and Its Effects on Robocalls and Robotexts

AI, it seems, is on the tip of everyone’s tongue these days. The Dame brought you a recap of President Biden’s orders addressing AI at the beginning of the month. At the FCC’s open meeting this morning, the commissioners were presented with a request for a Notice of Inquiry (NOI) to gather additional information about the benefits and harms of artificial intelligence and its use alongside robocalls and robotexts. The NOI’s five areas of interest are as follows:

  • First, the NOI seeks comment on whether, and if so how, the commission should define AI technologies for purposes of the inquiry. This includes particular uses of AI technologies that are relevant to the commission’s statutory responsibilities under the TCPA, which protects consumers from nonemergency calls and texts made using an autodialer or containing an artificial or prerecorded voice.
  • Second, the NOI seeks comment on how these technologies may impact consumers who receive robocalls and robotexts, including any potential benefits and risks that the emerging technologies may create. Specifically, the NOI seeks information on how these technologies may alter the functioning of the existing regulatory framework so that the commission may formulate policies that benefit consumers by ensuring they continue to receive privacy protections under the TCPA.
  • Third, the NOI seeks comment on whether it is necessary or possible to determine at this point whether future types of AI technologies may fall within the TCPA’s existing prohibitions on autodialed calls or texts and artificial or prerecorded voice messages.
  • Fourth, the NOI seeks comment on whether the commission should consider ways to verify the authenticity of legitimately generated AI voice or text content from trusted sources, such as through the use of watermarks, certificates, labels, signatures, or other markers when callers rely on AI technology to generate content. This may include, for example, emulating a human voice on a robocall or creating content in a text message.
  • Lastly, the NOI seeks comment on what next steps the commission should consider to further the inquiry.

While all the commissioners voted to approve the NOI, they did share a few insightful comments. Commissioner Carr stated, “If AI can combat illegal robocalls, I’m all for it,” but he also expressed that he does “…worry that the path we are heading down is going to be overly prescriptive” and suggested, “…Let’s put some common-sense guardrails in place, but let’s not be so prescriptive and so heavy-handed on the front end that we end up benefiting large incumbents in the space because they can deal with the regulatory frameworks and stifling the smaller innovation to come.”

Commissioner Starks shared, “I, for one, believe this intersectionality is critical. While the future of AI remains uncertain, one thing is clear — it has the potential to impact, if not transform, every aspect of American life, and because of that potential, each part of our government bears responsibility to better understand the risks and opportunities within its mandate, while being mindful of the limits of its expertise, experience, and authority. In this era of rapid technological change, we must collaborate and lean into our expertise across agencies to best serve our citizens and consumers.” Commissioner Starks seemed to be particularly focused on AI’s ability to facilitate bad actors in schemes like voice cloning and how the FCC can implement safeguards against this type of behavior.

“AI technologies can bring new challenges and opportunities. Responsible and ethical implementation of AI technologies is crucial to strike a balance, ensuring that the benefits of AI are harnessed to protect consumers from harm rather than amplifying the risks in an increasingly digital landscape,” Commissioner Gomez shared.

Finally, the topic of the AI NOI wrapped up with Chairwoman Rosenworcel commenting, “…I think we make a mistake if we only focus on the potential for harm. We need to focus equally on how artificial intelligence can radically improve the tools we have today to block unwanted robocalls and robotexts. We are talking about technology that can see patterns in our network traffic unlike anything we have today. That can lead to the development of analytic tools that are exponentially better at finding fraud before it reaches us at home. Used at scale, we can not only stop this junk, we can use it to increase trust in our networks. We are asking how artificial intelligence is being used right now to recognize patterns in network traffic and how it can be used in the future. We know the risks this technology involves, but we also want to harness the benefits.”

40 Countries, Including the US, Vow Not to Pay Ransoms

The United States joined 39 other countries this week in the International Counter Ransomware Initiative, an effort to stem the flow of ransom payments to cybercriminals. The initiative aims to cut off criminals’ funding through better information sharing about ransom payment accounts. Member states will develop two information-sharing platforms, one created by Lithuania and another jointly by Israel and the United Arab Emirates. Members of the initiative will share a “black list” through the U.S. Department of the Treasury, including information on digital wallets being used to move ransomware payments. Finally (in an interesting coming together of the last two big-ticket items in technology), the initiative will utilize AI to analyze cryptocurrency blockchains to identify criminal transactions.

While government officials near-unanimously counsel against paying ransoms, organizations caught in a ransomware attack often pay to avoid embarrassment and to lower the cost of incident response and mitigation. In the aggregate, however, paying ransoms leads to ballooning ransom demands and escalating ransomware activity. This initiative may help reverse these long-term trends.

Blair Robinson (Law Clerk – Not yet admitted to practice) authored this article.

Cybersecurity Awareness Dos and Don’ts Refresher

As we have adjusted to a combination of hybrid, in-person and remote work conditions, bad actors continue to exploit the vulnerabilities associated with our work and home environments. Below are a few tips to help employers and employees address the security threats and challenges of our new normal:

  • Monitoring and awareness of cybersecurity threats as well as risk mitigation;
  • Use of secure Wi-Fi networks, strong passwords, secure VPNs, network infrastructure devices and other remote working devices;
  • Use of company-issued or approved laptops and sandboxed virtual systems instead of personal computers and accounts, as well as careful handling of sensitive and confidential materials; and
  • Preparing to handle security incidents while remote.

Be on the lookout for phishing and other hacking attempts.

Be on high alert for cybersecurity attacks, as cybercriminals are always searching for security vulnerabilities to exploit. A malicious hacker could target employees working remotely by creating a fake coronavirus notice or a phony request for charitable contributions, or even by impersonating someone from the company’s Information Technology (IT) department. Employers should educate employees on the red flags of phishing emails and continuously remind employees to remain vigilant of potential scams, exercise caution when handling emails and report any suspicious communications.

Maintain a secure Wi-Fi connection.

Information transmitted over public and unsecured networks (such as a free café, store or building Wi-Fi) can be viewed or accessed by others. Employers should configure VPN for telework and enable multi-factor authentication for remote access. To increase security at home, employers should advise employees to take additional precautions, such as using secure Wi-Fi settings and changing default Wi-Fi passwords.

Change and create strong passwords.

Passwords that use pet or children’s names, birthdays or any other information that can be found on social media can be easily guessed by hackers. Employers should require account and device passwords to be sufficiently long and complex, including capital and lowercase letters, numbers and special characters. As an additional precaution, employees should consider changing their passwords before transitioning to remote work.
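To make the guidance concrete, here is a minimal sketch of generating passphrases and complex passwords with Python’s standard `secrets` module (the short word list is a placeholder; a real deployment would use a large dictionary such as the EFF diceware list):

```python
import secrets
import string

# Placeholder word list for illustration; substitute a large dictionary in practice.
WORDS = ["orbit", "velvet", "canyon", "mosaic", "thunder", "harbor", "lantern", "poppy"]

def passphrase(n_words: int = 4) -> str:
    """Join randomly chosen words; long passphrases resist guessing far
    better than pet names or birthdays."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))

def complex_password(length: int = 16) -> str:
    """Random password drawn from upper/lowercase letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(passphrase())
print(complex_password())
```

The `secrets` module (rather than `random`) is used because it draws from a cryptographically secure source, which is the appropriate choice for credentials.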

Update and secure devices. 

To reduce system flaws and vulnerabilities, employers should regularly update VPNs, network infrastructure devices and devices used for remote work, as well as advise employees to promptly accept updates to operating systems, software and applications on personal devices. When feasible, employers should consider implementing additional safeguards, such as keystroke encryption and mobile device management (MDM) software on employee personal devices.

Use of personal devices and deletion of electronic files.

Home computers may not have deployed critical security updates, may not be password protected and may not have an encrypted hard drive. To the extent possible, employers should urge employees to use company-issued laptops or sandboxed virtual systems. Where this is not possible, employees should use secure personal computers, and employers should advise employees to create a separate user account on personal computers designated for work purposes and to regularly empty trash or recycle bins and download folders.

Prohibit use of personal email for work purposes.

To avoid unauthorized access, personal email accounts should not be used for work purposes. Employers should remind employees to avoid forwarding work emails to personal accounts and to promptly delete emails in personal accounts as they may contain sensitive information.

Secure collaboration tools.

Employees and teams working from home need to stay connected and often rely on instant-messaging and web-conferencing tools (e.g., Slack and Zoom). Employers should ensure company-provided collaboration tools, if any, are secure and should restrict employees from downloading any non-company approved tools. If new collaboration tools are required, IT personnel should review the settings of such tools (as they may not be secure or may record conversations by default), and employers should consider training employees on appropriate use of such tools.

Handle physical documents with care.

Remote work arrangements may require employees to take offsite sensitive or confidential materials that they would not otherwise remove from the office. Employees should be advised to handle these documents with the appropriate levels of care and avoid printing sensitive or confidential materials on public printers. These documents should be securely shredded or returned to the office for proper disposal.

Develop clear guidelines and train employees on cyberhygiene.

To ensure employees are aware of remote work responsibilities and obligations, employers should prepare clear telework guidelines (and incorporate any standards required by applicable regulatory schemes) and post the guidelines on the organization’s intranet and/or circulate the guidelines to employees via email. A list of key company contacts, including Human Resources and IT security personnel, should be distributed to employees in the event of an actual or suspected security incident.

Prepare for remote activation of incident response and crisis management plans.

Employers should review existing incident response, crisis management and business continuity plans, as well as ensure relevant stakeholders are prepared for remote activation of these plans, such as having hard copies of relevant plans and contact information at home.

DO

  • DO create complex passphrases
  • DO change home Wi-Fi passwords
  • DO create a separate Wi-Fi network for guests
  • DO install anti-malware and anti-virus software for internet-enabled devices
  • DO keep software (including anti-virus/anti-malware software), web browsers, and operating systems up-to-date
  • DO delete files from download folders and trash bins
  • DO immediately report lost or stolen devices
  • DO log off accounts and close windows and browsers on shared devices
  • DO review mobile app settings on shared devices
  • DO handle physical documents with sensitive and/or confidential information in a secure manner

DON’T

  • Do NOT use public or unsecure Wi-Fi networks without using VPN
  • Do NOT access or send confidential information over unsecured Wi-Fi networks
  • Do NOT leave electronic or paper documents out in the open
  • Do NOT allow family or friends to use company-provided devices
  • Do NOT leave devices logged-in
  • Do NOT select “remember me” on shared devices
  • Do NOT share passwords with family members
  • Do NOT use names or birthdays in passwords
  • Do NOT save work documents locally on shared devices
  • Do NOT store confidential information on portable storage devices, such as USB drives or external hard drives

Navigating the Updated Federal Trade Commission Guidelines for Social Media Influencer Marketing

The Federal Trade Commission (FTC) recently updated its Guides Concerning Use of Endorsements and Testimonials in Advertising (Guidelines). There has not been an update to the Guidelines since 2009, before TikTok even existed and Facebook was still the hip new kid on the block.

Clearly, a lot has changed since then, and being aware of and understanding the updates to these Guidelines is crucial for companies, influencers, brand ambassadors, and marketing professionals who engage in influencer marketing campaigns. The Guidelines take into account the evolving nature of influencer marketing and provide more specific guidance on how influencers can make clear and conspicuous disclosures to their followers. This summary provides a basic overview of the key changes and important points to consider in the wake of the updated Guidelines.

Background:

Anyone who has access to the internet is aware that social media influencer marketing has been a rapidly growing industry over the past decade, and the FTC recognizes the need for adequate transparency concerning this area of marketing to protect consumers from deceptive advertising practices.

The general aim of the updated Guidelines is to ensure consumers can clearly identify when a social media post, blog post, video, or other similar media is sponsored or contains affiliate links. The updated Guidelines seek to develop or make clear guidance concerning specifically: (1) who is considered an endorser; (2) what is considered an “endorsement”; (3) who can be liable for a deceptive endorsement; (4) what is considered “clear and conspicuous” for purposes of disclosure; (5) practices of consumer reviews; and (6) when and how paid or material connections need to be disclosed.

Key Changes and Considerations:

  • Clear and Conspicuous Disclosure: Influencers must make disclosures clear and conspicuous. This means disclosures should be easily noticed, not buried within a long caption or hidden among a sea of hashtags. The Guidelines require that disclosure be “unavoidable” when posts are made through electronic mediums. The FTC suggests placing disclosures at the beginning of a post, especially on platforms where the full content can be cut off (e.g., Instagram). In broad terms, a disclosure will be deemed “clear and conspicuous” when “it is difficult to miss (i.e. easily noticeable) and easily understandable by ordinary consumers.”
  • Updated Definition of “endorsements”: The FTC has broadened its definition of “endorsements” and what it deems to be deceptive endorsement practices to include fake positive or negative reviews, tags on social media platforms, and virtual (AI) influencers.
  • Use of Hashtags: The Guidelines still hold that commonly used disclosure hashtags such as #ad, #sponsored, and #paidpartnership are acceptable, but those must be displayed in a manner that is easily perceptible by consumers. Influencers should avoid using vague or ambiguous hashtags that may not clearly indicate a paid relationship. Keep in mind, however, whether a specific social media tag counts as an endorsement disclosure is subject to fact-specific review.
  • In-Platform Tools: Social media platforms increasingly provide built-in tools for influencers to mark their posts as sponsored. However, be aware, the Guidelines emphasize that these tools can be helpful in disclosing partnerships, but they are not always sufficient to ensure that disclosures are clear and conspicuous. Parties using these tools should carefully evaluate whether they are clearly and conspicuously disclosing material connections.
  • Affiliate Marketing: If an influencer includes affiliate links in their content, they must disclose this relationship. Simply using affiliate links is considered a material connection and requires disclosure. Phrases such as “affiliate link” or “commission earned” can be used to disclose affiliate relationships.
  • Endorsements and Testimonials: The FTC guidelines apply not only to sponsored content, but also to endorsements and testimonials. Influencers must disclose material connections with the brands whose products they endorse, whether they received compensation or discounted/free products. Beyond the financial relationships described above, influencers will need to disclose non-financial relationships, such as being friends with a brand’s owners or employees.
  • Ongoing Relationships: Disclosures should be made in every post or video if a material connection for benefit exists, even in cases of ongoing or long-term partnerships.
  • Endorsements Directed at Children: The updated Guidelines added a new section specifically addressing advertising focused on reaching children. The FTC states that such advertising “may be of special concern because of the character of the audience.” While the Guidelines do not offer specific guidance on how to address advertisements intended for children, those who intend to target children as the intended audience should pay special attention to the “clear and conspicuous” requirements espoused by the FTC.

Enforcement and Penalties:

The FTC takes non-compliance with these guidelines seriously and can impose significant fines and penalties on brands, marketers, and influencers who fail to make proper disclosures. Significantly, the updated Guidelines make it clear that influencers who fail to make proper disclosures may be personally liable to consumers who are misled by their endorsements. Furthermore, brands and marketers may also be held responsible for ensuring that influencers with whom they have paid relationships adhere to these guidelines.

Conclusion:

Bear in mind, the Guidelines themselves are not the law, but they serve as a vital guide to avoid breaking it. Overall, the updated Guidelines on influencer disclosures emphasize transparency and consumer protection. To stay compliant and maintain consumer trust, it is imperative that all parties involved in influencer marketing familiarize themselves with these Guidelines and ensure that disclosures are clear, conspicuous, and consistently made in every relevant post or video. Furthermore, as this marketing industry continues to develop and evolve, it will be increasingly important to monitor ongoing developments and changes in the FTC guidelines to stay current with best practices.

Cryptocurrency Brings Disruption to Bankruptcy Courts—What Parties Can Expect and the Open Issues Still To Be Resolved (Part Two)

In this second part of our blog exploring the various issues courts need to address in applying the Bankruptcy Code to cryptocurrency, we expand upon our roadmap.  In part one, we addressed whether cryptocurrency constitutes property of the estate, the impacts of cryptocurrency’s fluctuating valuation, issues of perfection, and the effects of cryptocurrency on debtor-in-possession financing.  In this part two, we explore preferential transfers of cryptocurrency, whether self-executing smart contracts would violate the automatic stay, and how confusing regulatory guidelines negatively impact bankruptcy proceedings, including plan feasibility.

Preferential Transfers

Pursuant to section 547(b) of the Bankruptcy Code, a debtor-in-possession (or trustee) can avoid a transfer of the debtor’s property to a creditor made in the 90 days before the filing of the petition if, among other things, the creditor received more than it would have in a Chapter 7 liquidation proceeding.  Notably, such a transfer can only be avoided if the thing transferred was the debtor’s property.  When cryptocurrency is valued and whether cryptocurrency is considered to be property of the estate can both impact preference liability.

Perhaps the first question to arise in cryptocurrency preference litigation is whether the transferred cryptocurrency is property of the estate.  If, as in the Chapter 11 bankruptcy case of Celsius Network LLC and its affiliates, the cryptocurrency withdrawn by the accountholder during the ninety days prior to the bankruptcy is determined to be property of the estate, and not the accountholder’s property, a preferential transfer claim could be asserted.  If, however, the cryptocurrency was property of the accountholder, for instance if it was held in a wallet to which only the accountholder had exclusive rights, no preference liability would attach to the withdrawal of the cryptocurrency.

Assuming that a preferential transfer claim lies, the court must decide how to value the preferential transfer.  Section 550 of the Bankruptcy Code allows a debtor-in-possession to recover “the property transferred, or, if the court so orders, the value of such property.”[1] This gives the debtor-in-possession wide latitude in asserting a preference claim.  For instance, the debtor-in-possession could take the position that the cryptocurrency is a commodity, in which case a claim could be asserted to recover the cryptocurrency itself, which, by the end of the case, may be worth much more than it was at the time of the transfer, with any gain accruing to the estate’s benefit.[2]  In contrast, the party receiving the transferred cryptocurrency would likely take the position that the cryptocurrency is currency, in which case a claim would be limited to the value of the cryptocurrency at the time of the transfer.[3]

The proper valuation methodology has not to date been definitively addressed by the courts.  Perhaps the closest a court has come to deciding that issue was in Hashfast Techs. LLC v. Lowe,[4] where the trustee claimed that a payment of 3,000 bitcoins to a supplier was a preferential transfer.  The bitcoin was worth approximately $360,000 at the time of the transfer but was worth approximately $1.2 million when the trustee asserted the preferential transfer claim.  The trustee argued that the payment to the supplier was intended to be a transfer of bitcoins and not a payment of $360,000, and that the supplier was required to pay 3,000 bitcoins to the estate, notwithstanding the substantial increase in value (and the resulting windfall to the estate).  Ultimately, the court refused to decide whether bitcoin is either currency or commodities and held that “[i]f and when the [trustee] prevails and avoids the subject transfer of bitcoin to defendant, the court will decide whether, under 11 U.S.C. § 550(a), he may recover the bitcoin (property) transferred or their value, and if the latter, valued as of what date.”[5]

The changing value of cryptocurrency will also impact the question of whether the creditor received more than it would have in a Chapter 7 liquidation proceeding.[6]  While the value of a preferential transfer is determined at the time of the transfer,[7] the analysis of whether the transfer made the creditor better off than in a Chapter 7 liquidation is determined at the time of a hypothetical distribution, which means, practically, at the time of the petition.[8]  Therefore, if a customer withdraws cryptocurrency from a platform during the 90-day preference period, and the cryptocurrency decreases in value during those 90 days, that customer could arguably be liable for a preferential transfer because the withdrawn cryptocurrency was worth more at the time of the transfer than at the time of the petition.
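The timing mismatch described above can be sketched with invented numbers (a hypothetical illustration only, not a prediction of how any court would value a transfer):

```python
# Hypothetical illustration of the preference timing issue: the transfer is
# valued when made, but the Chapter 7 "better off" comparison looks to value
# as of the petition date. All figures are invented.

def preference_exposure(coins: float, price_at_transfer: float,
                        price_at_petition: float) -> dict:
    """Compare the value of withdrawn cryptocurrency at the transfer date
    versus the petition date."""
    value_at_transfer = coins * price_at_transfer
    value_at_petition = coins * price_at_petition
    return {
        "value_at_transfer": value_at_transfer,
        "value_at_petition": value_at_petition,
        # If the coins were worth more when withdrawn than at the petition,
        # the creditor arguably did better than it would have in liquidation.
        "worth_more_at_transfer": value_at_transfer > value_at_petition,
    }

# Customer withdraws 10 coins at $30,000 each; price falls to $20,000 by the petition.
print(preference_exposure(10, 30_000, 20_000))
```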

Presently unanswered is whether the safe-harbor provisions provided for in section 546(e) of the Bankruptcy Code shield cryptocurrency transfers from preferential transfer attack.  Pursuant to section 546(e), a debtor-in-possession cannot avoid as a preference a margin payment or settlement payment made to a “financial participant . . . in connection with a securities contract . . . commodity contract . . . [or] forward contract . . . that is made before the commencement of the case.” If the court determines that cryptocurrency is a security or commodity, and that the transfers were made in connection with forward or commodities contracts, then section 546(e) may shield those transfers from attack as preferential.

Violations of the Automatic Stay and Smart Contracts

The self-executing nature of smart contracts may raise automatic stay concerns.  The automatic stay arises upon the filing of a bankruptcy petition and, in general, prevents creditors and other parties from continuing their collection efforts against the debtor.[9]  Of relevance to smart contracts, section 362(a)(3) of the Bankruptcy Code states that the stay applies to “any act” to obtain possession of or control of property of the estate.  Very recently, in City of Chicago v. Fulton, the United States Supreme Court held that section 362(a)(3) prevents any “affirmative act that would alter the status quo at the time of the bankruptcy petition.”[10]

Prior to Fulton, a bankruptcy court in Arkansas examined an analogous issue in Hampton v. Yam’s Choice Plus Autos, Inc. (In re Hampton).[11]  In Hampton, the court adjudicated whether a device that automatically locked the debtor out of her car violated the automatic stay when it disabled the car’s engine postpetition.  The device relied on a code—if the debtor paid, the creditor sent her a code, which she would then input, and this prevented the device from automatically disabling the car’s starter.  In this instance, the court found a violation of the automatic stay.[12]

Based on current case law, it remains unclear whether a smart contract, operating automatically, would violate the automatic stay.  For example, if a smart contract is based on a DeFi loan, and it automatically executes postpetition to transfer to the lender assets of the estate, a court may find a violation of the automatic stay.

Hampton would suggest that such actions would be a violation—but two issues caution against relying on Hampton as a clear bellwether.  First, Hampton was decided pre-Fulton and it remains unclear whether, and to what extent, the Supreme Court’s holding in Fulton would change the outcome of Hampton. Second, a potentially key factual distinction exists: the device in Hampton required the creditor to give the debtor a code to prevent the disabling of the car, but smart contracts can be programmed to automatically execute postpetition without any further action by the parties.  If a smart contract is found to violate the automatic stay, the next question is whether such a violation is willful, meaning that a court can impose monetary penalties, including potentially punitive damages.[13]

Note that even if a smart contract is found not to violate the automatic stay, it does not mean that a creditor can retain the property.  Section 542 of the Bankruptcy Code requires those in possession of estate property to turn over the property to the estate.  The estate is created at the time of the filing of the petition, and therefore, any smart contract that executes postpetition would theoretically concern estate property and be subject to turnover.  Unfortunately, ambiguities arise even in this statute, as section 542 contains a good-faith exemption to the turnover mandate if the recipient is not aware of the bankruptcy filing and transfers the assets.[14]  Thus, the turnover mandate may be difficult to apply to non-debtor parties to smart contracts who program the contract ahead of time with the knowledge that such a contract may execute after a bankruptcy petition but with no actual knowledge of such petition having been filed.

Regulatory Confusion

The regulatory world has no uniform approach to cryptocurrency. Both the Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC), perhaps in part spurred by executive pressure, recently advanced heavier regulatory oversight of cryptocurrency.[15]  The two agencies also share jurisdiction; one agency asserting authority to regulate cryptocurrency does not preclude the other from doing so.[16]  Other agencies, such as the Department of the Treasury’s Office of Foreign Assets Control (OFAC) and Financial Crimes Enforcement Network (FinCEN), have also asserted jurisdiction to regulate cryptocurrency.[17]  The result is regulatory confusion for market participants, both because of the sheer number of agencies asserting jurisdiction and the fact that individual agencies can sometimes issue confusing and ill-defined guidelines.

For instance, the SEC applies the Howey test, developed in the 1940s, to determine whether a specific cryptocurrency is a security.[18]  Unfortunately, the SEC has stated that whether a specific cryptocurrency is a security can change over time, and it recently identified even more cryptocurrencies that it believes meet Howey’s definition of a security in its lawsuits against crypto exchanges Binance.US and Coinbase.[19]

The regulatory confusion clouding cryptocurrency has directly impacted bankruptcy proceedings. One recent case study offers a glimpse into that disconcerting influence. In 2022, crypto exchange Voyager Digital Holdings Ltd. filed for Chapter 11 bankruptcy. Another major crypto exchange, Binance.US, entered into an agreement with Voyager to acquire its assets—valued at around $1 billion. The SEC, the New York Department of Financial Services (NYDFS), and the New York Attorney General all filed sale objections in Voyager’s bankruptcy proceedings, arguing that if Voyager’s crypto assets constitute securities, then Binance.US’s rebalancing and redistribution of these assets to its account holders would be an “unregistered offer, sale or delivery after sale of securities” in violation of Section 5 of the Securities Act.[20]  The NYDFS also alleged that the agreement “unfairly discriminates” against New York citizens by subordinating their recovery of diminished assets in favor of Voyager’s creditors—as well as foreclosing the option to recover crypto rather than liquidated assets.[21]

SEC trial counsel noted that, “regulatory actions, whether involving Voyager, Binance.US or both, could render the transactions in the plan impossible to consummate, thus making the plan unfeasible.”[22]  In April 2023, Binance.US sent Voyager a legal notice canceling the prospective transaction, writing that “the hostile and uncertain regulatory climate in the United States has introduced an unpredictable operating environment impacting the entire American business community.”[23]

The SEC’s appetite for regulating cryptocurrencies as securities appears to be growing.  On August 15, 2023, the SEC settled its claims against Bittrex, which included violations of Section 5 of the Securities Act, for $24 million.[24] Upon the settlement, the director of the SEC stated that Bittrex “worked with token issuers . . . in an effort to evade the federal securities law.  They failed.”[25]  That aggressive enforcement, combined with regulatory uncertainty, leaves cryptocurrency entities in a precarious position.

Plan Feasibility

The Voyager case also highlights issues with plan feasibility in Chapter 11.  In Voyager, the SEC objected to plan feasibility on the basis that one known digital asset of Voyager was a security and, therefore, the purchaser should register as a securities dealer.[26]  Although the court overruled the SEC’s objection, as noted above, Binance.US ultimately withdrew its purchase offer, placing blame on the overall regulatory climate.[27]  As regulations remain uncertain, and government authorities have shown a willingness to insert themselves into the reorganization process, debtors who file for bankruptcy will have to brace for new or unforeseen objections to an otherwise confirmable plan.

Conclusion

Cryptocurrency has been seen by some as a disruptive force in finance.  As the above issues show, it also appears to be a disruptive force in bankruptcy cases.  Debtors and creditors alike will have to weather the disruption as best they can while the courts continue to grapple with the many open issues raised by cryptocurrencies.

See Cryptocurrency Brings Disruption to Bankruptcy Courts—What Parties Can Expect and the Open Issues Still To Be Resolved (Part One)


[1] See 11 U.S.C. § 550(a).

[2] This position would arguably be consistent with cases interpreting section 550(a) of the Bankruptcy Code that have held that the estate is entitled to recover the value of the property when value has appreciated subsequent to the transfer.  See, e.g., In re Am. Way Serv. Corp., 229 B.R. 496, 531 (Bankr. S.D. Fla. 1999) (noting that when the value of the transferred property has appreciated, “the trustee is entitled to recover the property itself, or the value of the property at the time of judgment.”).

[3] Mary E. Maginnis, Money for Nothing: The Treatment of Bitcoin in Section 550 Recovery Actions, 20 U. Pa. J. Bus. L. 485, 516 (2017).

[4] No. 14-30725DM (Bankr. N.D. Cal. Feb. 22, 2016).

[5] Order on Motion for Partial Summary Judgment at 1-2, Hashfast Techs. LLC v. Lowe, Adv. No. 15-3011DM (Bankr. N.D. Cal. 2016) (ECF No. 49).

[6] See 11 U.S.C. § 547(b)(5) (requiring the transferee to have received more than it would have received in a Chapter 7 liquidation).

[7] Maginnis, supra note 3.

[8] See In re CIS Corp., 195 B.R. 251, 262 (Bankr. S.D.N.Y. 1996) (“Thus, the Code § 547(b)(5) analysis is to be made as of the time the Debtor filed its bankruptcy petition.”); Sloan v. Zions First Nat’l Bank (In re Castletons, Inc.), 990 F.2d 551, 554 (9th Cir. 1993) (“When assessing an alleged preferential transfer, the relevant inquiry . . . [is] . . . the actual effect of the payment as determined when bankruptcy results.”).

[9] 11 U.S.C. § 362(a).

[10] 141 S.Ct. 585, 590 (2021).

[11] 319 B.R. 163 (Bankr. E.D. Ark. 2005).

[12] Hampton, 319 B.R. at 165-170.

[13] See 11 U.S.C. § 362(k) (providing that, subject to a good faith exception “an individual injured by any willful violation of [the automatic stay] shall recover actual damages, including costs and attorneys’ fees, and, in appropriate circumstances, may recover punitive damages.”).

[14] See 11 U.S.C. § 542(c).

[15] David Gura, The White House calls for more regulations as cryptocurrencies grow more popular (Sept. 6, 2022, 6:00 AM), https://www.npr.org/2022/09/16/1123333428/crypto-cryptocurrencies-bitcoin-terra-luna-regulation-digital-currencies.

[16] See, e.g., CFTC v. McDonnell, 287 F. Supp. 3d 222, 228-29 (E.D.N.Y. 2018) (“The jurisdictional authority of CFTC to regulate virtual currencies as commodities does not preclude other agencies from exercising their regulatory power when virtual currencies function differently than derivative commodities.”).

[17] See Treasury Announces Two Enforcement Actions for over $24M and $29M Against Virtual Currency Exchange Bittrex, Inc. (October 11, 2022), https://home.treasury.gov/news/press-releases/jy1006.

[18] See SEC v. W.J. Howey Co., 328 U.S. 293 (1946).

[19] Emily Mason, Coinbase Hit With SEC Suit That Identifies $37 Billion of Crypto Tokens As Securities, Forbes (June 6, 2023, 5:08 PM), https://www.forbes.com/sites/emilymason/2023/06/06/coinbase-hit-with-sec-suit-that-identifies-37-billion-of-crypto-tokens-as-securities/?sh=3cc4c6d667a9; SEC Charges Crypto Asset Trading Platform Bittrex and its Former CEO for Operating an Unregistered Exchange, Broker, and Clearing Agency, https://www.sec.gov/news/press-release/2023-78 (last visited July 31, 2023).

[20] Jack Schickler, SEC Objects to Binance.US’ $1B Voyager Deal, Alleging Sale of Unregistered Securities, (last updated Feb. 23, 2023 at 2:32 p.m.), https://www.coindesk.com/policy/2023/02/23/sec-objects-to-binanceus-1b-voyager-deal-alleging-sale-of-unregistered-securities/.

[21] See NYDFS Objection to Plan, In re Voyager Digital Holdings, et al. at 9-10, No. 22-10943 (Bankr. S.D.N.Y. Feb. 22, 2023) [ECF No. 1051].

[22] Kari McMahon, SEC and New York Regulators Push Back on Binance.US’s Acquisition of Voyager, The Block (Feb. 23, 2023), https://www.theblock.co/post/214333/sec-and-new-york-regulators-push-back-on-binance-uss-acquisition-of-voyager.

[23] Yueqi Yang & Steven Church, Binance US Ends $1 Billion Deal to Buy Bankrupt Crypto Firm Voyager, Bloomberg (April 25, 2023), https://www.bloomberg.com/news/articles/2023-04-25/binance-us-terminates-deal-to-buy-bankrupt-crypto-firm-voyager.

[24] See Crypto Asset Trading Platform Bittrex and Former CEO to Settle SEC Charges for Operating an Unregistered Exchange, Broker, and Clearing Agency, https://www.sec.gov/news/press-release/2023-150 (last visited Sept. 18, 2023).

[25] Id.

[26] See Objection of the U.S. Securities Exchange Commission to Confirmation at 3 n.5, In re Voyager Digital Holdings, et al., No. 22-10943 (Bankr. S.D.N.Y. Feb. 22, 2023) (ECF No. 1047).

[27] See supra note 23.

For more articles on cryptocurrency, visit the NLR Communications, Media and Internet section.

Navigating Data Ownership in the AI Age, Part 1: Types of Big Data and AI-Derived Data

The emergence of big data, artificial intelligence (AI), and the Internet of Things (IoT) has fundamentally transformed our understanding and utilization of data. While the value of big data is beyond dispute, its management introduces intricate legal questions, particularly concerning data ownership, licensing, and the protection of derived data. This article, the first installment in a two-part series, outlines challenges and opportunities presented by AI-processed and IoT-generated data. The second part, to be published Thursday, October 19, will discuss the complexities of the legal frameworks that govern data ownership.

Defining Big Data and Its Legal Implications

Big data serves as a comprehensive term for large, dynamically evolving collections of electronic data that often exceed the capabilities of traditional data management systems. This data is not merely voluminous but also possesses two key attributes with significant legal ramifications. First, big data is a valuable asset that can be leveraged for a multitude of applications, ranging from decoding consumer preferences to forecasting macroeconomic trends and identifying public health patterns. Second, the richness of big data often means it contains sensitive and confidential information, such as proprietary business intelligence and personally identifiable information (PII). As a result, the management and utilization of big data require stringent legal safeguards to ensure both the security and ethical handling of this information.

Legal Frameworks Governing Data Ownership

Navigating the intricate landscape of data ownership necessitates a multi-dimensional understanding that encompasses legal, ethical, and technological considerations. This complexity is further heightened by diverse intellectual property (IP) laws and trade secret statutes, each of which can confer exclusive rights over specific data sets. Additionally, jurisdictional variations in data protection laws, such as the European Union’s General Data Protection Regulation (GDPR) and the United States’ California Consumer Privacy Act (CCPA), introduce another layer of complexity. These laws empower individuals with greater control over their personal data, granting them the right to access, correct, delete, or port their information. However, the concept of “ownership” often varies depending on the jurisdiction and the type of data involved — be it personal or anonymized.

Machine-Generated Data and Ownership

The issue of data ownership extends beyond individual data to include machine-generated data, which introduces its own set of complexities. Whether it’s smart assistants generating data based on human interaction or autonomous vehicles operating independently of human input, ownership often resides with the entity that owns or operates the machine. This is typically defined by terms of service or end-user license agreements (EULAs). Moreover, IP laws, including patents and trade secrets, can also come into play, especially when the data undergoes specialized processing or analysis.

Derived Data and Algorithms

Derived and derivative algorithms refer to computational models or methods that evolve from, adapt, or draw inspiration from pre-existing algorithms. These new algorithms must introduce innovative functionalities, optimizations, or applications to be considered derived or derivative. Under U.S. copyright law, the creator of a derivative work generally holds the copyright for the new elements that did not exist in the original work. However, this does not extend to the foundational algorithm upon which the derivative algorithm is based. The ownership of the original algorithm remains with its initial creator unless explicitly transferred through legal means such as a licensing agreement.

In the field of patent law, derivative algorithms could potentially be patented if they meet the criteria of being new, non-obvious, and useful. However, the patent would only cover the novel aspects of the derivative algorithm, not the foundational algorithm from which it was derived. The original algorithm’s patent holder retains their rights, and any use of the derivative algorithm that employs the original algorithm’s patented aspects would require permission or licensing from the original patent holder.

Derived and derivative algorithms may also be subject to trade secret protection, which safeguards confidential information that provides a competitive advantage to its owner. Unlike patents, trade secrets do not require registration or public disclosure but do necessitate reasonable measures to maintain secrecy. For example, a company may employ non-disclosure agreements, encryption, or physical security measures to protect its proprietary algorithms.

AI-Processed and Derived Data

The advent of AI has ushered in a new era of data analytics, presenting both unique opportunities and challenges in the domain of IP rights. AI’s ability to generate “derived data” or “usage data” has far-reaching implications that intersect with multiple legal frameworks, including copyright, trade secrets, and potentially even patent law. This intersectionality adds a layer of complexity to the issue of data ownership, underscoring the critical need for explicit contractual clarity in licensing agreements and Data Use Agreements (DUAs).

AI-processed and derived data can manifest in various forms, each with unique characteristics. Extracted data refers to data culled from larger datasets for specific analyses. Restructured data has been reformatted or reorganized to facilitate more straightforward analysis. Augmented data is enriched with additional variables or parameters to provide a more comprehensive view. Inferred data involves the creation of new variables or insights based on the analysis of existing data. Lastly, modeled data has been transformed through machine learning (ML) models to predict future outcomes or trends. Importantly, these data types often contain new information or insights not present in the original dataset, thereby adding multiple layers of value and utility.

The benefits of using AI-processed and derived data can be encapsulated in three main points. First, AI algorithms can clean, sort, and enrich data, enhancing its quality. Second, the insights generated by AI can add significant value to the original data, rendering it more useful for various applications. Third, AI-processed data can catalyze new research, innovation, and product development avenues.

Conversely, the challenges in data ownership are multifaceted. First, AI-processed and derived data often involves a complex web of multiple stakeholders, including data providers, AI developers, and end users, which can complicate the determination of ownership rights. Second, the rapidly evolving landscape of AI and data science leads to a lack of clear definitions for terms like “derived data,” thereby introducing potential ambiguities in legal agreements. Third, given the involvement of multiple parties, it becomes imperative to establish clear and consistent definitions and agreements that meticulously outline the rights and responsibilities of each stakeholder.

For more articles on AI, visit the NLR Communications, Media and Internet section.

California’s “Delete Act” Significantly Expands Requirements for Data Brokers

California recently passed a groundbreaking new law aimed at further regulating the data broker industry. California is already one of only three states (along with Oregon and Vermont) that require data brokers—businesses that collect and sell personal information from consumers with whom the business does not have a direct relationship—to meet certain registration requirements.

Under the new law, the regulation of data brokers—including the registration requirements—falls within the purview of the California Privacy Protection Agency (CPPA) and requires data brokers to comply with expanded disclosure and record keeping requirements. Notably, the law also requires the CPPA to make an “accessible deletion mechanism” available to consumers at no cost by January 1, 2026. The tool is intended to act as a single “delete button,” allowing consumers to request the deletion of all of their personal information held by registered data brokers within the state.

Putting it into practice: Businesses considered “data brokers” should carefully review the new and expanded requirements and develop a compliance plan, as certain aspects of the law (e.g., the enhanced registry requirements) go into effect as soon as January 31, 2024.

For more articles on data brokers, visit the NLR Communications, Media and Internet section.

Chat with Caution: The Growing Data Privacy Compliance and Litigation Risk of Chatbots

In a new wave of privacy litigation, plaintiffs have recently filed dozens of class action lawsuits in state and federal courts, primarily in California, seeking damages for alleged “wiretapping” by companies with public-facing websites. The complaints assert a common theory: that website owners using chatbot functions to engage with customers are violating state wiretapping laws by recording chats and giving service providers access to them, which plaintiffs label “illegal eavesdropping.”

Chatbot wiretapping complaints seek substantial damages from defendants and assert new theories that would dramatically expand the application of state wiretapping laws to customer support functions on business websites.

Although there are compelling reasons why courts should decline to extend wiretapping liability to these contexts, early motions to dismiss have met mixed outcomes. As a result, businesses that use chatbot functions to support customers now face a high-risk litigation environment, with inconsistent court rulings to date, uncertain legal holdings ahead, significant statutory damages exposure, and a rapid uptick in plaintiff activity.

Strict State Wiretapping Laws

Massachusetts and California have some of the most restrictive wiretapping laws in the nation, requiring all parties to consent to a recording, in contrast to the one-party consent required under federal and many state laws. Those two states have been key battlegrounds for plaintiffs attempting to extend state privacy laws to website functions, partly because they provide for significant statutory damages per violation and an award of attorney’s fees.

Other states with wiretapping statutes requiring the consent of all parties include Delaware, Florida, Illinois, Maryland, Montana, Nevada, New Hampshire, Pennsylvania, and Washington. As in Massachusetts and California, litigants in Florida and Pennsylvania have started asserting wiretapping claims based on website functions.

Plaintiffs’ Efforts to Extend State Wiretapping Laws to Chatbot Functions

Chatbot litigation is a product of early favorable rulings in cases targeting other website technologies, refashioned to focus on chat functions. Chatbots allow users to direct inquiries to AI virtual assistants or human customer service representatives. Chatbot functions are often deployed using third-party vendor software, and when chat conversations are recorded, those vendors may be provided access to live recordings or transcripts.

In this most recent wave, plaintiffs claim that recording chat conversations and making them accessible to vendors violates state wiretapping laws, with liability for both the website operator and the vendor. However, there are several reasons why applying wiretapping laws in this context is inappropriate, and defendants are asserting these legal arguments in early dispositive motion practice with mixed results.

What Businesses Can Do to Address Growing Chatbot Litigation Risk

Despite compelling legal arguments for why these suits should fail, businesses with website chat functions should exercise caution to avoid being targeted, as we expect chatbot wiretap claims to skyrocket. This litigation risk is present in all two-party consent states, but especially in Massachusetts and California. Companies should be aware that they can be targeted in multiple states, even if they do not offer products or services directly to consumers.

In this environment, a review and update of your company’s website for data privacy compliance, including chatbot activities, is advisable to avoid expensive litigation. These measures include:

  • Incorporating clear disclosure language and robust affirmative consent procedures into the website’s chat functions, including specific notification in the function itself that the chatbot is recording and storing communications
  • Expanding website dispute resolution terms, including terms that could reduce the risk of class action litigation and mass arbitration
  • Updating the website’s privacy policy to accurately and clearly explain what data, if any, is recorded, stored, and transmitted to service providers through its chat functions, ideally in a dedicated “chat” section
  • Considering data minimization measures in connection with website chat functions
  • Evaluating third-party software vendors’ compliance history, including due diligence to ensure a complete understanding of how chatbot data is collected, transmitted, stored, and used, and whether the third party’s privacy policies are acceptable

Companies may also want to consider minimizing aspects of their chatbots that have a high annoyance factor – such as blinking “notifications” – to reduce the likelihood of attracting a suit. This list is not comprehensive, and businesses should ensure their legal teams are aware of their website functions and data collection practices.

For more articles on privacy, visit the NLR Communications, Media and Internet section.

Emojis in eDiscovery

Emojis Pose Challenges to Lawyers, Juries & Discovery Specialists

We have all used emojis.  Whether in our text messages or in our IMs, these wordless communications are commonplace.  In fact, by some estimates, more than 10 billion emojis are sent every day in various electronic messaging mediums. With the use of chat and mobile platforms only increasing, what do lawyers and eDiscovery professionals need to know about these marks and how they impact the discovery process and the courtroom?

What is an Emoji?

Emojis are small cartoon images that are interpreted and supported at the discretion of each application developer.  The predecessor to the emoji was the emoticon, a facial expression approximated with ordinary keyboard characters, such as :-).

Why Are Emojis Complicated?

Anyone reading eDiscovery content knows that these tiny cartoon pictures, while often playful and cute, can be a challenge to identify, collect, and process.  Part of the challenge is volume driven, but part is platform driven.  Specifically, the Unicode Consortium, the standards body whose specifications allow software to recognize text characters and display them uniformly, acknowledges thousands of different emojis.  But that number includes variants of the same image, such as different genders and skin tones.

And while much work has been done to standardize emojis, different systems support different emojis.  A slice of pizza may be recognized universally, but a slice from the popular Domino’s® franchise looks different from a slice bought at the local brick-oven pizza parlor.  Similarly, a pizza-slice emoji viewed on one vendor’s device will look different from the same emoji viewed on another vendor’s device.  Anyone who has shared a text among users of different phone operating systems has learned this lesson already.  Indeed, if you have ever received a question mark inside a rectangular box, which appears when the recipient’s application does not support the sender’s, you have encountered an indecipherable emoji.

Complicating this phenomenon, different instant messaging systems have proprietary emojis and also allow users to create their own, none of which are acknowledged by Unicode.org.  Add to that the fact that emojis often evolve.  For example, the “pistol” emoji was changed in 2016 by one operating system to a less dangerous version of itself (i.e., a “water pistol” or “toy gun”).  But when received on a different platform, that water pistol or toy gun might still appear as a regular “gun” or “pistol” emoji.
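These platform issues have a concrete technical root: an emoji that renders as a single picture may be encoded as several Unicode code points, and the glyph shown for each code point is chosen by the rendering platform, not fixed by the standard. As a minimal illustration (a sketch only, not part of any particular eDiscovery tool), Python's standard library can expose the code points that search and processing software actually matches against:

```python
# Sketch: inspecting the Unicode code points behind emojis, as an
# eDiscovery processing tool might. Glyph appearance is up to each
# platform; only these code points travel with the message.
import unicodedata


def describe(text: str) -> list[str]:
    """Return the official Unicode name of each code point in `text`."""
    return [unicodedata.name(ch, "<unnamed>") for ch in text]


# A "thumbs up" with a skin-tone modifier is TWO code points, not one,
# so a search keyed only to the base emoji can miss the variant.
print(describe("\U0001F44D"))            # ['THUMBS UP SIGN']
print(describe("\U0001F44D\U0001F3FD"))  # base + Fitzpatrick type-4 modifier

# The "pistol" code point (U+1F52B) never changed; only vendors' glyphs
# for it diverged (revolver vs. water pistol), which is why the same
# message can look different on sender and recipient devices.
print(describe("\U0001F52B"))            # ['PISTOL']
```

Because the skin-tone variant is a distinct code-point sequence, whether a keyword search for the base emoji also hits the variant depends on how the tool tokenizes text, which is one reason collection and processing specifications should address emojis explicitly.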

Emojis in Litigation

Assuming you have managed to secure relevant emojis during discovery, using them at trial can still hold surprises.  Once a wordless communication (i.e., an emoji) is admitted into the record, courts and juries will look to the surrounding circumstances to interpret it.  And while this analysis generally includes scrutiny of the accompanying text and whether the emoji alters the meaning of the message, how does one account for platform interpretation issues?  What if the water gun I sent from my device is rendered on another device as a menacing weapon, manifesting a different intent to the recipient than the one I intended?  At first glance an emoji may seem innocuous, such as a simple smile communicating happiness, but taken in the context or community in which the communication is used, its meaning may be interpreted differently by the sender and/or recipient.  Indeed, emojis should not be considered a universal language with universal meaning; like certain physical gestures, the meaning of a symbol can vary by community or culture.  Consider, for example, that the “thumbs up” emoji is considered vulgar in many countries in the Middle East yet is typically a positive expression in most other countries.[1]

Because the complexities of interpreting an emoji’s meaning and intent in court are exacerbated by competing platforms, a focused inquiry into the sender’s and recipient’s intent, the surrounding circumstances, and the accompanying text may be critical. Unfortunately, 1 + 1 does not always equal 2, and things may not be as they appear merely because of a certain electronically generated animated face.


[1] A few cases involving emojis include Ghanam v. Does (where the Michigan Court of Appeals analyzed the circumstances surrounding the use of a “sticking out its tongue” emoji within a communication in a defamation case); Commonwealth v. Danzey (smiley face embedded in social media posts did not immunize claims that the defendant stalked and harassed the victim where the wording demonstrated criminal intent); Kryzac v. State (Tennessee case where a “frowning face” emoji was used as evidence of the relationship between defendant and victim); State v. Disabato (defendant in Ohio was convicted of telecommunications harassment for sending unwanted text messages, some of which included “rodent” emojis); Commonwealth v. Foster (Pennsylvania defendant on probation for a drug-related conviction raised the suspicion of his probation officer when he posted photographs depicting guns and money along with three “pill” emojis).

For more articles on eDiscovery, visit the NLR Litigation section.