5 Trends to Watch: 2024 Emerging Technology

  1. Increased Adoption of Generative AI and Push to Minimize Algorithmic Biases – Generative AI took center stage in 2023, and the popularity of this technology will continue to grow. The importance of crafting nuanced and effective prompts will heighten, and adoption will spread across a wider variety of industries. Advancements in algorithms and more user-friendly platforms should increase accessibility, which in turn can sharpen the focus on minimizing algorithmic biases and on establishing guardrails governing AI policies. Of course, a keen awareness of the ethical considerations and policy frameworks will help guide generative AI’s responsible use.
  2. Convergence of AR/VR and AI May Result in “AR/VR on steroids.” The fusion of Augmented Reality (AR) and Virtual Reality (VR) technologies with AI unlocks a new era of customization and promises enhanced immersive experiences, blurring the lines between the digital and physical worlds. We expect further refinement and personalization of AR/VR to redefine gaming, education, and healthcare, along with various industrial applications.
  3. EV/Battery Companies Charge into Greener Future. With new technologies and chemistries, advancements in battery efficiency, energy density, and sustainability can move the adoption of electric vehicles (EVs) to new heights. Decreasing prices for battery metals can help make EVs more competitive with traditional vehicles. AI may provide new opportunities for optimizing EV performance and help solve challenges in battery development, reliability, and safety.
  4. “Rosie the Robot” is Closer than You Think. With advancements in machine learning algorithms, sensor technologies, and the integration of AI, the intelligence and adaptability of robotics should continue to grow. Large language models (LLMs) will likely enable more effective human-robot collaboration, and even non-technical users will find it easy to employ a robot to accomplish a task. Robotics is developing into a field where machines can learn, make decisions, and work in unison with people; it is no longer limited to monotonous, repetitive tasks.
  5. Unified Defense in Battle Against Cyber-Attacks. Digital threats are expected to only increase in 2024, including more sophisticated AI-powered attacks. As the international battle against hackers rages on, threat detection, response, and mitigation will play a crucial role in staying ahead of rapidly evolving cyber-attacks. Given the risks to national security and economic growth, expect increased collaboration between industries and governments to establish standardized cybersecurity frameworks that protect data and privacy.

10 Market Predictions for 2024 from a Healthcare Lawyer

For this healthcare lawyer, 2023 was a pretty unusual year, with the sudden entrance of a number of new players into the healthcare marketplace and a rapid retrenchment of others. With innovation showing no signs of slowing down in the year ahead, healthcare providers should consider how to adapt to improve the patient experience, increase their bottom line, and remain competitive in an evolving industry. Here are 10 personal observations from the past year that may help you plan for the year ahead.

  1. Health tech will continue to boom. Without a doubt, in my practice, health tech exploded, and understandably so. In the face of tight margins, healthcare technology may offer the promise of immediate returns (think revenue cycle). But it is also important to understand the context. Health tech offers the promise of quick implementation relative to construction of clinical space, and it can be accomplished without additional clinical staff or regulatory oversight, potentially resulting in a prompt return on investment. Advancing technologies and AI will enable real-time, data-driven surgical algorithms and patient-specific instruments to improve outcomes in a variety of specialties.
  2. Value-based care is here to stay. Everyone is interested in value-based care. In the past, value-based care was simply aspirational. Now, there are significant attempts to implement it on a sustained basis. It is not a coincidence that there has also been significant turnover in healthcare leadership in the past few years, and that has likely led to more receptivity.
  3. Expansion of value-based care models. There has been considerable activity around advanced primary care and single-condition chronic disease management. We are now starting to see broader efforts to manage care up and down the continuum of care, involving multi-specialty care and the gamut of care locations. Increased pressure to lower costs will result in increased volumes in lower-cost ambulatory settings.
  4. Regulatory scrutiny will continue to increase. For most, this is a given. In 2023, we saw increased scrutiny up and down the continuum, whether related to pharmaceutical costs, regulation of pharmacy benefit managers, healthcare transaction laws, or innovations in thinking around healthcare from the Federal Trade Commission. With the impending election, it is likely healthcare will receive considerable attention and scrutiny.
  5. Private equity (“PE”) will resume the march – with discipline. In my practice, PE entities rethought their growth strategies, shifting from a “growth at all costs” mindset to a focus on bringing acquisitions to profitability quickly. Now there appears to be an increasing focus on operations and an emphasis on making realistic assumptions to underlie growth. This has led to more realistic pricing discipline and investment in management teams with operational experience.
  6. Partnerships. There is an increasing trend towards partnerships between PE entities and health systems. Health systems are under considerable financial stress, and while they do not universally welcome PE with open arms, some systems do appear open to targeted partnerships. By the same token, PE entities are beginning to realize that they require clinical assets that are most readily available at health systems. This will continue in 2024.
  7. The rise of independent physician groups. There is increasing activity among freestanding physician groups. Some doctors are leery of PE because they believe it is solely focused on profits. Similarly, many physicians are reluctant to be employed by health systems because they believe they will simply become a referral source. While we are not likely to see a return to 2002, when many PE and health system physician deals were unwound, we will see increasing growth by independent physician groups.
  8. Continued consolidation. The trend towards consolidation in healthcare is nowhere near ending. To assume risk (the ultimate goal of value-based care), providers require scale, both vertically and horizontally. While segments of healthcare slowed in 2023, a resumption of growth is inevitable.
  9. Increased insolvencies. Most healthcare providers have very high fixed costs and low margins. Small swings in accounts receivable collections, wages, and managed care payments can have a large impact on entities that are just squeezing by.
  10. New entrants. Last year saw several new entrants to the healthcare marketplace nationally. Who would have thought Best Buy would enter the healthcare marketplace? There is still plenty of room for new models of care, which we will see in 2024.

2024 promises to be an interesting year in the healthcare industry.

The FCC Approves an NOI to Dive Deeper into AI and its Effects on Robocalls and Robotexts

It seems AI is on the tip of everyone’s tongue these days. The Dame brought you a recap of President Biden’s orders addressing AI at the beginning of the month. At this morning’s open meeting, the FCC was presented with a request for a Notice of Inquiry (NOI) to gather additional information about the benefits and harms of artificial intelligence and its use alongside robocalls and robotexts. The NOI focuses on five areas of interest:

  • First, the NOI seeks comment on whether, and if so how, the commission should define AI technologies for purposes of the inquiry. This includes particular uses of AI technologies that are relevant to the commission’s statutory responsibilities under the TCPA, which protects consumers from nonemergency calls and texts using an autodialer or containing an artificial or prerecorded voice.
  • Second, the NOI seeks comment on how AI technologies may impact consumers who receive robocalls and robotexts, including any potential benefits and risks that the emerging technologies may create. Specifically, the NOI seeks information on how these technologies may alter the functioning of the existing regulatory framework so that the commission may formulate policies that benefit consumers by ensuring they continue to receive privacy protections under the TCPA.
  • Third, the NOI seeks comment on whether it is necessary or possible to determine at this point whether future types of AI technologies may fall within the TCPA’s existing prohibitions on autodial calls or texts and artificial or prerecorded voice messages.
  • Fourth, the NOI seeks comment on whether the commission should consider ways to verify the authenticity of legitimately generated AI voice or text content from trusted sources, such as through the use of watermarks, certificates, labels, signatures, or other markers, when callers rely on AI technology to generate content. This may include, for example, emulating a human voice on a robocall or creating content in a text message.
  • Lastly, the NOI seeks comment on what next steps the commission should consider to further the inquiry.

While all the commissioners voted to approve the NOI, they did share a few insightful comments. Commissioner Carr stated, “If AI can combat illegal robocalls, I’m all for it,” but he also expressed that he does “…worry that the path we are heading down is going to be overly prescriptive” and suggests, “…Let’s put some common-sense guardrails in place, but let’s not be so prescriptive and so heavy-handed on the front end that we end up benefiting large incumbents in the space because they can deal with the regulatory frameworks and stifling the smaller innovation to come.”

Commissioner Starks shared, “I, for one, believe this intersectionality is critical. Because the future of AI remains uncertain, one thing is clear — it has the potential to impact, if not transform, every aspect of American life, and because of that potential, each part of our government bears responsibility to better understand the risks and opportunities within its mandate, while being mindful of the limits of its expertise, experience, and authority. In this era of rapid technological change, we must collaborate, lean into our expertise across agencies to best serve our citizens and consumers.” Commissioner Starks seemed to be particularly focused on AI’s ability to facilitate bad actors in schemes like voice cloning and how the FCC can implement safeguards against this type of behavior.

“AI technologies can bring new challenges and opportunities. Responsible and ethical implementation of AI technologies is crucial to strike a balance, ensuring that the benefits of AI are harnessed to protect consumers from harm rather than amplifying the risks in an increasingly digital landscape,” Commissioner Gomez shared.

Finally, the topic around the AI NOI wrapped up with Chairwoman Rosenworcel commenting, “…I think we make a mistake if we only focus on the potential for harm. We need to equally focus on how artificial intelligence can radically improve the tools we have today to block unwanted robocalls and robotexts. We are talking about technology that can see patterns in our network traffic unlike anything we have today. They can lead to the development of analytic tools that are exponentially better at finding fraud before it reaches us at home. Used at scale, we can not only stop this junk, we can use it to increase trust in our networks. We are asking how artificial intelligence is being used right now to recognize patterns in network traffic and how it can be used in the future. We know the risks this technology involves, but we also want to harness the benefits.”

Automating Entertainment: Writers Demand that Studios Not Use AI

When the Writers Guild of America (WGA) came out with its list of demands in the strike that has already ground production on many shows to a halt, chief among them was that the studios agree not to use artificial intelligence to write scripts. Specifically, the Guild had two asks: First, it said that “literary material,” including screenplays and outlines, must be generated by a person and not an AI; second, it insisted that “source material” not be AI-generated.

The Alliance of Motion Picture and Television Producers (AMPTP), which represents the studios, rejected this proposal. It countered that it would be open to holding annual meetings to discuss advancements in technology. Alarm bells sounded as the WGA saw both an existential threat to its survival and a sign that Hollywood was already planning for it.

Writers are often paid at a far lower rate to adapt “source material” such as a comic book or a novel into a screenplay than they are paid to generate original literary material. By using AI tools to generate an outline or first draft of an original story and then enlisting a human to “adapt” it into a screenplay, production studios potentially stand to save significantly.

Many industries have embraced the workflow of an AI-generated “first draft” that the human then punches up. And the WGA has said that its writers’ using AI as a tool is acceptable: There would essentially be a robot in the writers’ room with writers supplementing their craft with AI-generated copy, but without AI wholly usurping their jobs.

Everyone appears to be in agreement that AI could never write the next season of White Lotus or Succession, but lower-brow shows could easily be aped by AI. Law and Order, for instance, is an often-cited example, not just because it’s formulaic but because AIs are trained on massive data sets of copyrighted content and there are 20 seasons of Law and Order for the AI to ingest. And as AI technology gets more advanced, who knows what it could do? ChatGPT was initially released last November, and as of this writing we’re on GPT-4, a far more powerful version of a platform that is advancing exponentially.

The studios’ push for the expanded use of AI is not without its own risks. The Copyright Office has equivocated somewhat in its determination that AI-generated art is not protectable. In a recent Statement of Policy, the Office said that copyright will only protect aspects of the work that were judged to have been made by the authoring human, resulting in partial protections of AI-generated works. So, the better the AI gets—the more it contributes to cutting out the human writer—the weaker the copyright protection for the studios/networks.

Whether or not AI works infringe the copyrights on the original works is an issue that is currently being litigated in a pair of lawsuits against Stability AI, the startup that created Stable Diffusion (an AI tool with the impressive ability to turn text into images in what some have dubbed the most massive art heist in history). Some have questioned whether the humans who wrote the original episodes would get compensated, and the answer is maybe not. In most cases the scripts were likely works for hire, owned by the studios.

If the studios own the underlying scripts, what happens to the original content if the studios take copyrighted content and put it through a machine that turns out uncopyrightable content? Can you DMCA or sue someone who copies that? As of this writing, there are no clear answers to these questions.

There are legal questions and deeper philosophical questions about making art. As the AI improves and humans become more cyborgian, does the art become indistinguishable? Prolific users of Twitter say they think their thoughts in 280 characters. Perhaps our readers can relate to thinking of their time in six-minute increments, or 0.1s of an hour. Further, perhaps our readers can relate to their industry being threatened by automation. According to a recent report from Goldman Sachs, generative artificial intelligence is putting 44% of legal jobs at risk.

© Copyright 2023 Squire Patton Boggs (US) LLP


Clop Claims Zero-Day Attacks Against 130 Organizations

Russia-linked ransomware gang Clop has claimed that it has attacked over 130 organizations since late January, using a zero-day vulnerability in the GoAnywhere MFT secure file transfer tool, and was successful in stealing data from those organizations. The vulnerability is CVE-2023-0669, which allows attackers to achieve remote code execution.

The manufacturer of GoAnywhere MFT notified customers of the vulnerability on February 1, 2023, and issued a patch for the vulnerability on February 7, 2023.

HC3 issued an alert on February 22, 2023, warning the healthcare sector that Clop was targeting healthcare organizations, and recommended that organizations:

  • Educate and train staff to reduce the risk of social engineering attacks via email and network access.
  • Assess enterprise risk against all potential vulnerabilities and prioritize implementing the security plan with the necessary budget, staff, and tools.
  • Develop a cybersecurity roadmap that everyone in the healthcare organization understands.

Security professionals are recommending that information technology professionals update machines to the latest GoAnywhere version and “stop exposing port 8000 (the internet location of the GoAnywhere MFT admin panel).”
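For administrators who want a quick external check that an MFT host no longer answers on that port, a minimal reachability test along the following lines may help. This is only a sketch: the hostname is a placeholder, and a TCP connection test says nothing about whether the patched GoAnywhere version is actually installed.

    import socket

    def port_is_reachable(host: str, port: int = 8000, timeout: float = 3.0) -> bool:
        """Return True if a TCP connection to host:port succeeds."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    # Hypothetical host; substitute a system you administer and run the
    # check from outside your network perimeter.
    host = "mft.example.com"
    status = "still reachable" if port_is_reachable(host) else "not reachable"
    print(f"Port 8000 on {host}: {status}")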

Copyright © 2023 Robinson & Cole LLP. All rights reserved.

To AI or Not to AI: U.S. Copyright Office Clarifies Options

The U.S. Copyright Office has weighed in with formal guidance on the copyrightability of works whose generation included the use of artificial intelligence (AI) tools. The good news for technology-oriented human creative types: using AI doesn’t automatically disqualify your work from copyright protection. The bad news for independent-minded AIs: you still don’t qualify for copyright protection in the United States.

On March 16, 2023, the Copyright Office issued a statement of policy (“Policy”) to clarify its practices for examining and registering works that contain material generated by the use of AI and how copyright law’s human authorship requirements will be applied when AI was used. This Policy is not itself legally binding or a guarantee of a particular outcome, but many copyright applicants may breathe a sigh of relief that the Copyright Office has formally embraced AI-assisted human creativity.

The Policy is just the latest step in an ongoing debate over the copyrightability of machine-assisted products of human creativity. Nearly 150 years ago, the Supreme Court ruled that photographs are copyrightable. See Burrow-Giles Lithographic Company v. Sarony, 111 U.S. 53 (1884). The case involved a photographer’s claim against a lithographer for 85,000 unauthorized copies of a photograph of Oscar Wilde. The photo, Sarony’s “Oscar Wilde No. 18,” is shown below:

Sarony’s “Oscar Wilde No. 18”

The argument against copyright protection was that a photograph is “a reproduction, on paper, of the exact features of some natural object or of some person” and is therefore not a product of human creativity. Id. at 56. The Supreme Court disagreed, ruling that there was sufficient human creativity involved in making the photo, including posing the subject, evoking the desired expression, arranging the clothing and setting, and managing the lighting.

In the mid-1960s, the Copyright Office rejected a musical composition, Push Button Bertha, that was created by a computer, reasoning that it lacked the “traditional elements of authorship” because they were not created by a human.

In 2018, the U.S. Court of Appeals for the Ninth Circuit ruled that Naruto, a crested macaque (represented by a group of friendly humans), lacked standing under the Copyright Act to hold a copyright in the “monkey selfie” case. See Naruto v. Slater, 888 F.3d 418 (9th Cir. 2018). The “monkey selfie” is below:

Monkey Selfie

In February 2022, the Copyright Office rejected a registration (filed by interested humans) for a visual image titled “A Recent Entrance to Paradise,” generated by DABUS, the AI whose claimed fractal-based inventions are the subject of patent applications around the world. DABUS’ image is below:

“A Recent Entrance to Paradise”

Litigation over this rejected application remains pending.

And last month, the Copyright Office ruled that a graphic novel consisting of human-authored text and images generated using the AI tool Midjourney could, as a whole, be copyrighted, but that the images, standing alone, could not. See U.S. Copyright Office, Cancellation Decision re: Zarya of the Dawn (VAu001480196) at 2 (Feb. 21, 2023).

The Copyright Office’s issuance of the Policy was necessitated by the rapid and remarkable improvements in generative AI tools over even the past several months. In December 2022, generative AI tool Dall-E generated the following images in response to nothing more than the prompt, “portrait of a musician with a hat in the style of Rembrandt”:

Four portraits generated by AI tool Dall-E from the prompt, "portrait of a musician with a hat in the style of Rembrandt."

If these were human-generated paintings, or even photographs, there is no doubt that they would be copyrightable. But given that all four images were generated in mere seconds, with a single, general prompt from a human user, do they meet the Copyright Office’s criteria for copyrightability? The answer, now, is a clear “no” under the Policy.

However, the Policy opens the door to registering AI-assisted human creativity. The toggle points will be:

“…whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.” 

In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of “mechanical reproduction” or instead of an author’s “own original mental conception, to which [the author] gave visible form.” 

“The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This will necessarily be a case-by-case inquiry.”

See Policy (citations omitted).

Machine-produced authorship alone will continue not to be registerable in the United States, but human selection and arrangement of AI-produced content could lead to a different result under the Policy. The Policy provides select examples to help guide registrants, who are encouraged to study them carefully. The Policy, together with the Copyright Office’s near-term determinations, will be critical to watch for anyone seeking to increase the likelihood that a registration application will be granted as the Copyright Office continues to assess the impact of new technology on the creative process. AI tools should not all be viewed as the “same” or fungible. The type of AI and how it is used will be specifically considered by the Copyright Office.

In the short term, the Policy provides some practical guidance to applicants on how to describe the role of AI in a new copyright application, as well as how to amend a prior application in that regard if needed. While some may view the Policy as “new” ground for the Copyright Office, it is consistent with the Copyright Office’s long-standing efforts to protect the fruits of human creativity even if the backdrop (AI technologies) may be “new.”

As a closing note, it bears observing that copyright law in the United Kingdom does permit limited copyright protection for computer-generated works – and has done so since 1988. Even under the U.K. law, substantial questions remain; the author of a computer-generated work is considered to be “the person by whom the arrangements necessary for the creation of the work are undertaken.” See Copyright, Designs and Patents Act (1988) §§ 9(3), 12(7) and 178. In the case of images generated by a consumer’s interaction with a generative AI tool, would that be the consumer or the generative AI provider?

Copyright © 2023 Womble Bond Dickinson (US) LLP All Rights Reserved.

Lawyer Bot Short-Circuited by Class Action Alleging Unauthorized Practice of Law

Many of us are wondering how long it will take for ChatGPT, the revolutionary chatbot by OpenAI, to take our jobs. The answer: perhaps not as soon as we fear!

On March 3, 2023, Chicago law firm Edelson P.C. filed a complaint against DoNotPay, self-described as “the world’s first robot lawyer.” Edelson may have short-circuited the automated barrister by filing a lawsuit alleging the unauthorized practice of law.

DoNotPay is marketed as an AI program intended to assist users in need of legal services, but who do not wish to hire a lawyer. The organization was founded in 2015 to assist users in disputing parking tickets. Since then, DoNotPay’s services have expanded significantly. The company’s website offers to help users fight corporations, overcome bureaucratic obstacles, locate cash and “sue anyone.”

In spite of those lofty promises, Edelson’s complaint counters by pointing out certain deficiencies, stating, “[u]nfortunately for its customers, DoNotPay is not actually a robot, a lawyer, or a law firm. DoNotPay does not have a law degree, is not barred in any jurisdiction and is not supervised by any lawyer.”

The suit was brought by plaintiff Jonathan Faridian, who claims to have used DoNotPay for legal drafting projects, including demand letters, a small claims court filing, and an employment discrimination complaint. Faridian’s complaint explains that he was under the impression he was purchasing legal documents from an attorney, only to later discover that the “substandard” results did not comport with his expectations.

When asked for comment, DoNotPay’s representative denied Faridian’s allegations, explaining the organization intends to defend itself “vigorously.”

© 2023 Wilson Elser

The Future of Stablecoins, Crypto Staking and Custody of Digital Assets

In the wake of the collapse of cryptocurrency exchange firm FTX, the Securities and Exchange Commission (SEC) has ratcheted up its oversight and enforcement of crypto firms engaged in activities ranging from crypto staking to custody of digital assets. This is due in part to concerns that the historically free-wheeling and largely unregulated crypto marketplace may adversely impact U.S. investors and contaminate traditional financial systems. The arguments that cryptocurrencies and digital assets should not be viewed as securities under federal laws largely fall on deaf ears at the SEC. Meanwhile, the state of the crypto economy in the United States remains in flux as the SEC, other regulators and politicians alike attempt to balance competing interests of innovation and investment in a relatively novel and untested asset class.

Is Crypto Staking Dead?

First, what is crypto staking? By way of background, it’s necessary to understand a bit about blockchain technology, which serves as the underpinning for all cryptocurrency and digital asset transactions. One of the perceived benefits of such transactions is that they are decentralized and “peer-to-peer” – meaning that Person A can transact directly with Person B without the need for a financial intermediary to approve the transaction.

However, in the absence of a central authority to validate a transaction, blockchain requires other verification processes or consensus mechanisms such as “proof of work” (which in the case of Bitcoin mining ensures that transactions are valid and added to the Bitcoin blockchain correctly) or “proof of stake” (a network of “validators” who contribute or “stake” their own crypto in exchange for a chance to validate a new transaction, update the blockchain and earn a reward). Proof of work has come under fire by environmental activists for the enormous amounts of computer power and energy required to solve complex mathematical or cryptographic puzzles to validate a transaction before it can be recorded on the blockchain. In contrast, proof of stake is analogous to a shareholder voting their shares of stock to approve a corporate transaction.
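To make the “proof of work” idea concrete, the following is a toy sketch of the hash puzzle that miners race to solve. It is purely illustrative; real networks use far higher difficulty targets and richer block structures than this single string.

    import hashlib

    def mine(block_data: str, difficulty: int = 4):
        """Find a nonce such that SHA-256(block_data + nonce)
        starts with `difficulty` zero hex digits."""
        target = "0" * difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce, digest
            nonce += 1

    nonce, digest = mine("Alice pays Bob 1 coin")
    print(f"nonce={nonce}, hash={digest}")

Each additional zero digit of difficulty multiplies the expected work by 16, which is why proof of work consumes so much energy; proof of stake replaces this computational race with validator selection weighted by staked assets.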

Second, why has crypto staking caught the attention of the SEC? Many crypto firms and exchanges offer “staking as a service” (SaaS), whereby investors can stake (or lend) their digital assets in exchange for lucrative returns. This practice is akin to a person depositing cash in a bank account in exchange for interest payments – minus the FDIC insurance that backs bank deposits to protect depositors.

Recently, on February 9, 2023, the SEC charged two crypto firms, commonly known as “Kraken,” for violating federal securities laws by offering a lucrative crypto asset SaaS program. Pursuant to this program, investors could stake their digital assets with Kraken in exchange for annual investment returns of up to 21 percent. According to the SEC, this program constituted the unregistered sale of securities in violation of federal securities laws. Moreover, the SEC claims that Kraken failed to adequately disclose the risks associated with its staking program. According to the SEC’s Enforcement Division director:

“Kraken not only offered investors outsized returns untethered to any economic realities but also retained the right to pay them no returns at all. All the while, it provided them zero insight into, among other things, its financial condition and whether it even had the means of paying the marketed returns in the first place.”1

Without admitting or denying the SEC’s allegations, Kraken has agreed to pay a $30 million civil penalty and will no longer offer crypto staking services to U.S. investors. Meanwhile, other crypto firms that offer similar programs, such as Binance and Coinbase, are waiting for the other shoe to drop – including the possibility that the SEC will ban all crypto staking programs for U.S. retail investors. Separate and apart from potentially extinguishing a lucrative revenue stream for crypto firms and investors alike, it may have broader consequences for proof of stake consensus mechanisms commonly used to validate blockchain transactions.
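To put advertised returns of “up to 21 percent” in perspective, a back-of-the-envelope comparison (hypothetical amounts, simple non-compounding interest) shows how far such a program outstrips an ordinary bank deposit:

    def annual_return(principal: float, rate: float) -> float:
        """One-year return at a simple annual rate."""
        return principal * rate

    stake = 10_000  # hypothetical amount in U.S. dollars
    print(annual_return(stake, 0.21))   # marketed staking return: 2100.0
    print(annual_return(stake, 0.005))  # typical savings account: 50.0

Returns of that size, offered with little disclosure about how they are funded, are exactly what the SEC flagged in the quote above.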

NY DFS Targets Stablecoins

In the world of cryptocurrency, stablecoins are typically considered the most secure and least volatile because they are often pegged 1:1 to some designated fiat (government-backed) currency such as U.S. dollars. In particular, all stablecoins issued by entities regulated by the New York Department of Financial Services (NY DFS) are required to be fully backed 1:1 by cash or cash equivalents. However, on February 13, 2023, NY DFS unexpectedly issued a consumer alert stating that it had ordered Paxos Trust Company (Paxos) to stop minting and issuing a stablecoin known as “BUSD.” BUSD is reportedly the third largest stablecoin by market cap and pegged to the U.S. dollar.

The reasoning behind the NY DFS order remains unclear from the alert, which merely states that “DFS has ordered Paxos to cease minting Paxos-issued BUSD as a result of several unresolved issues related to Paxos’ oversight of its relationship with Binance in regard to Paxos-issued BUSD.”2 The same day, Paxos confirmed that it would stop issuing BUSD. However, in an effort to assuage investors, Paxos stated, “All BUSD tokens issued by Paxos Trust have and always will be backed 1:1 with U.S. dollar–denominated reserves, fully segregated and held in bankruptcy remote accounts.”3

Separately, the SEC reportedly issued a Wells Notice to Paxos on February 12, 2023, indicating that it intended to commence an enforcement action against the company for violating securities laws in connection with the sale of BUSD, which the SEC characterized as unregistered securities. Paxos, meanwhile, categorically denies that BUSD constitute securities, but nonetheless has agreed to stop issuing these tokens in light of the NY DFS order.

It remains to be seen whether the regulatory activity targeting BUSD is the beginning of a broader crackdown on stablecoins amid concerns that, contrary to popular belief, such coins may not be backed by adequate cash reserves.

Custody of Crypto Assets

On February 15, 2023, the SEC proposed changes to the existing “custody rule” under the Investment Advisers Act of 1940. As noted by SEC Chair Gary Gensler, the custody rule was designed to “help ensure that [investment] advisers don’t inappropriately use, lose, or abuse investors’ assets.”4 The proposed changes to the rule (referred to as the “safeguarding rule”) would require investment advisers to maintain client assets – specifically including crypto assets – in qualified custodial accounts. As the SEC observed, “[although] crypto assets are a relatively recent and emerging type of asset, this is not the first time custodians have had to adapt their practices to safeguard different types of assets.”5

A qualified custodian generally is a federal or state-chartered bank or savings association, certain trust companies, a registered broker-dealer, a registered futures commission merchant or certain foreign financial institutions.6 However, as noted by the SEC, many crypto assets trade on platforms that are not qualified custodians. Accordingly, “this practice would generally result in an adviser with custody of a crypto asset security being in violation of the current custody rule because custody of the crypto asset security would not be maintained by a qualified custodian from the time the crypto asset security was moved to the trading platform through the settlement of the trade.”7

Moreover, in a departure from existing practice, the proposed safeguarding rule would require an investment adviser to enter into a written agreement with the qualified custodian. This custodial agreement would set forth certain minimum protections for the safeguarding of customer assets, including crypto assets, such as:

  • Implementing appropriate measures to safeguard an advisory client’s assets8
  • Indemnifying an advisory client when its negligence, recklessness or willful misconduct results in that client’s loss9
  • Segregating an advisory client’s assets from its proprietary assets10
  • Keeping certain records relating to an advisory client’s assets
  • Providing an advisory client with periodic custodial account statements11
  • Evaluating the effectiveness of its internal controls related to its custodial practices.12

The cumbersome new requirements proposed for custodians of crypto assets appear to be a direct consequence of the collapse of FTX, which resulted in the inexplicable “disappearance” of billions of dollars of customer funds. By tightening the screws on custodians and investment advisers, the SEC is seeking to protect the everyday retail investor by leveling the playing field in the complex and often murky world of crypto. However, it remains to be seen whether, and to what extent, the proposed safeguarding rule will emerge from the public comment period, which will remain open for 60 days following publication of the proposal in the Federal Register.


1 SEC Press Release 2023-25 (Feb. 9, 2023).

2 NY DFS Consumer Alert (Feb. 13, 2023) found at https://www.dfs.ny.gov/consumers/alerts/Paxos_and_Binance.

3 Paxos Press Release (Feb. 13, 2023) found at https://paxos.com/2023/02/13/paxos-will-halt-minting-new-busd-tokens/.

4 SEC Press Release 2023-30 (Feb. 15, 2023).

5 SEC Proposed Rule, p. 79.

6 SEC Fact Sheet: Proposed Safeguarding Rule.

7 SEC Proposed Rule, p. 68.

8 For instance, per the SEC, this could require storing crypto assets in a “cold wallet.”

9 Per the SEC, “the proposed indemnification requirement would likely operate as a substantial expansion in the protections provided by qualified custodians to advisory clients, in particular because it would result in some custodians holding advisory client assets subject to a simple negligence standard rather than a gross negligence standard.” See SEC Proposed Rule, p. 89.

10 Per the SEC, this requirement is intended to “ensure that client assets are at all times readily identifiable as client property and remain available to the client even if the qualified custodian becomes financially insolvent or if the financial institution’s creditors assert a lien against the qualified custodian’s proprietary assets (or liabilities).” See SEC Proposed Rule, p. 92.

11 Per the SEC, “[in] a change from the current custody rule, the qualified custodian would also now be required to send account statements, at least quarterly, to the investment adviser, which would allow the adviser to more easily perform account reconciliations.” See SEC Proposed Rule, p. 98.

12 Per the SEC, the proposed rule would require that the “qualified custodian, at least annually, will obtain, and provide to the investment adviser a written internal control report that includes an opinion of an independent public accountant as to whether controls have been placed in operation as of a specific date, are suitably designed, and are operating effectively to meet control objectives relating to custodial services (including the safeguarding of the client assets held by that qualified custodian during the year).” See SEC Proposed Rule, p. 101.

© 2023 Wilson Elser

Locking Tik Tok? White House Requires Removal of TikTok App from Federal IT

On February 28, the White House issued a memorandum giving federal employees 30 days to remove the TikTok application from any government devices. This memo is the result of an act passed by Congress that requires the removal of TikTok from any federal information technology. The act responded to concerns that the Chinese government may use data from TikTok for intelligence gathering on Americans.

I’m Not a Federal Employee — Why Does It Matter?

The White House memo clearly covers all employees of federal agencies. However, it also covers information technology used by contractors in performing work for federal agencies. As such, if you are a federal contractor using computer software or technology required by the U.S. government, you must remove TikTok from it in the next 30 days.

The limited exceptions to the removal mandate require federal government approval. The memo mentions national security interests and activities, law enforcement work, and security research as possible exceptions. However, there is a process to apply for an exception – it is not automatic.

Takeaways

Even if you are not a federal employee or a government contractor, this memo is a good prompt to revisit your company’s social media policies and cell phone use procedures. Do you want TikTok (or any other social media app) on your devices? Many companies have found themselves in PR trouble due to lapses in enforcement of these types of rules. In addition, excessive use of social media in the workplace has been shown to be a drag on productivity.

© 2023 Bradley Arant Boult Cummings LLP

FTC Launches New Office of Technology

On February 17, 2023, the Federal Trade Commission announced the launch of its new Office of Technology. The Office of Technology will assist the FTC by strengthening and supporting law enforcement investigations and actions, advising and engaging with staff and the Commission on policy and research initiatives, and engaging with the public and relevant experts to identify market trends, emerging technologies and best practices. The Office will have dedicated staff and resources and will be headed by Chief Technology Officer Stephanie T. Nguyen.

Article By Hunton Andrews Kurth’s Privacy and Cybersecurity Practice Group


Copyright © 2023, Hunton Andrews Kurth LLP. All Rights Reserved.