Patch Up – Log4j and How to Avoid a Cybercrime Christmas

A vulnerability so dangerous that Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly called it “one of the most serious [she’s] seen in [her] entire career, if not the most serious” arrived just in time for the holidays. On December 10, 2021, CISA and the director of cybersecurity at the National Security Agency (NSA) began alerting the public to a critical vulnerability in the Apache Log4j Java logging framework. Civilian government agencies have been instructed to mitigate the vulnerability by Christmas Eve, and companies should follow suit.

The Log4j vulnerability allows threat actors to remotely execute code on both on-premises and cloud-based application servers, thereby obtaining control of the impacted servers. CISA expects the vulnerability to affect hundreds of millions of devices. This is a widespread, critical vulnerability, and companies should quickly assess whether, and to what extent, they or their service providers are using Log4j.

Immediate Recommendations

  • Immediately upgrade all versions of Apache Log4j to 2.15.0; a brief sketch for locating Log4j libraries on your own systems appears after this list.
  • Ask your service providers whether their products or environment use Log4j, and if so, whether they have patched to the latest version. Helpfully, CISA sponsors a community-sourced GitHub repository with a list of software related to the vulnerability as a reference guide.
  • Confirm your security operations are monitoring internet-facing systems for indicators of compromise.
  • Review your incident response plan and ensure all response team information is up to date.
  • If your company is involved in an acquisition, discuss the security steps taken within the target company to address the Log4j vulnerability.
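
As a starting point for taking inventory before patching, the following minimal sketch (in Python) walks a directory tree and flags plainly named log4j-core JAR files older than 2.15.0. The scan root, version threshold and filename pattern are illustrative assumptions, and the script will not detect Log4j copies bundled inside other archives, so treat it as a first pass rather than a complete audit.

    # Minimal sketch: find log4j-core JARs by filename and flag versions below
    # 2.15.0 (the patched version cited above). Assumptions: the scan root is
    # illustrative, and JARs nested inside other archives are not detected.
    import os
    import re

    SEARCH_ROOT = "/opt"          # assumed starting directory; adjust per server
    SAFE_VERSION = (2, 15, 0)     # minimum patched version per the guidance above

    JAR_PATTERN = re.compile(r"^log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

    for dirpath, _, filenames in os.walk(SEARCH_ROOT):
        for name in filenames:
            match = JAR_PATTERN.match(name)
            if not match:
                continue
            version = tuple(int(part) for part in match.groups())
            status = "OK" if version >= SAFE_VERSION else "VULNERABLE - upgrade"
            print(f"{os.path.join(dirpath, name)}: version {'.'.join(map(str, version))} [{status}]")

Results from a scan like this can also sharpen the questions you put to service providers about their own environments.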

The versatility of this vulnerability has already attracted the attention of malicious nation-state actors. For example, government-affiliated cybercriminals in Iran and China have a “wish list” (no holiday pun intended) of entities that they are aggressively targeting with the Log4j vulnerability. Due to this malicious nation-state activity, if your company experiences a ransomware attack related to the Log4j vulnerability, it is particularly important to pay attention to potential sanctions-related issues.

Companies with additional questions about the Log4j vulnerability and its potential impact on technical threats and potential regulatory scrutiny or commercial liability are encouraged to contact counsel.

© 2021 Bracewell LLP

In the Coming ‘Metaverse’, There May Be Excitement but There Certainly Will Be Legal Issues

The concept of the “metaverse” has garnered much press coverage of late, addressing such topics as the new appetite for metaverse investment opportunities, a recent virtual land boom, or just the promise of it all, where “crypto, gaming and capitalism collide.”  The term “metaverse,” which comes from Neal Stephenson’s 1992 science fiction novel “Snow Crash,” is generally used to refer to the development of virtual reality (VR) and augmented reality (AR) technologies, featuring a mashup of massive multiplayer gaming, virtual worlds, virtual workspaces, and remote education to create a decentralized wonderland and collaborative space. The grand concept is that the metaverse will be the next iteration of the mobile internet and a major part of both digital and real life.

Don’t feel like going out tonight in the real world? Why not stay “in” and catch a show or meet people/avatars/smart bots in the metaverse?

As currently conceived, the metaverse, “Web 3.0,” would feature a synchronous environment giving users a seamless experience across different realms, even if such discrete areas of the virtual world are operated by different developers. It would boast its own economy where users and their avatars interact socially and use digital assets based in both virtual and actual reality, a place where commerce would presumably be heavily based in decentralized finance, DeFi. No single company or platform would operate the metaverse, but rather, it would be administered by many entities in a decentralized manner (presumably on some open source metaverse OS) and work across multiple computing platforms. At the outset, the metaverse would look like a virtual world featuring enhanced experiences interfaced via VR headsets, mobile devices, gaming consoles and haptic gear that makes you “feel” virtual things. Later, the contours of the metaverse would be shaped by user preferences, monetary opportunities and incremental innovations by developers building on what came before.

In short, the vision is that multiple companies, developers and creators will come together to create one metaverse (as opposed to proprietary, closed platforms) and have it evolve into an embodied mobile internet, one that is open and interoperable and would include many facets of life (i.e., work, social interactions, entertainment) in one hybrid space.

In order for the metaverse to become a reality (that is, to successfully link current gaming and communications platforms with other new technologies into a massive new online destination), many obstacles will have to be overcome, even beyond the hardware, software and integration issues. The legal issues stand out, front and center. Indeed, the concept of the metaverse presents a law school final exam’s worth of legal questions to sort out.  Meanwhile, we are still trying to resolve the myriad of legal issues presented by “Web 2.0,” the Internet as we know it today. Adding the metaverse to the picture will certainly make things even more complicated.

At the heart of it is the question of what legal underpinnings we need for the metaverse infrastructure – an infrastructure that will allow disparate developers and studios, e-commerce marketplaces, platforms and service providers to all coexist within one virtual world.  To make it even more interesting, it is envisioned to be an interoperable, seamless experience for shoppers, gamers, social media users or just curious internet-goers armed with wallets full of crypto to spend and virtual assets to flaunt.  Currently, we have some well-established web platforms that are closed digital communities and some emerging ones that are open, each with varying business models that will have to be adapted, in some way, to the metaverse. Simply put, the greater the immersive experience and features and interactions, the more complex the related legal issues will be.

Contemplating the metaverse, these are just a few of the legal issues that come to mind:

  • Personal Data, Privacy and Cybersecurity – Privacy and data security lawyers are already challenged with addressing the global concerns presented by varying international approaches to privacy and growing threats to data security. If the metaverse fulfills the hype and develops into a 3D web-based hub for our day-to-day lives, the volume of data that will be collected will be exponentially greater than the reams of data already collected, and the threats to that data will expand as well. Questions to consider will include:
    • Data and privacy – What’s collected? How sensitive is it? Who owns or controls it? The sharing of data will be the cornerstone of a seamless, interoperable environment where users and their digital personas and assets will be usable and tradeable across the different arenas of the metaverse.  How will the collection, sharing and use of such data be regulated?  What laws will govern the collection of data across the metaverse? The laws of a particular state?  Applicable federal privacy laws? The GDPR or other international regulations? Will there be a single overarching “privacy policy” governing the metaverse under a user and merchant agreement, or will there be varying policies depending on which realm of the metaverse you are in? Could some developers create a more “privacy-focused” experience or would the personal data of avatars necessarily flow freely in every realm? How will children’s privacy be handled and will there be “roped off,” adults-only spaces that require further authentication to enter? Will the concepts that we talk about today – “personal information” or “personally identifiable information” – carry over to a world where the scope of available information expands exponentially as activities are tracked across the metaverse?
    • Cybersecurity: How will cybersecurity be managed in the metaverse? What requirements will apply with respect to keeping data secure? How will regulation or site policies evolve to address deep fakes, avatar impersonation, trolling, stolen biometric data, digital wallet hacks and all of the other cyberthreats that we already face today and are likely to be exacerbated in the metaverse? What laws will apply and how will the various players collaborate in addressing this issue?
  • Technology Infrastructure: The metaverse will be a robust computing-intensive experience, highlighting the importance of strong contractual agreements concerning cloud computing, IoT, web hosting, and APIs, as well as software licenses and hardware agreements, and technology service agreements with developers, providers and platform operators involved in the metaverse stack. Performance commitments and service levels will take on heightened importance in light of the real-time interactions that users will expect. What is a meaningful remedy for a service level failure when the metaverse (or a part of the metaverse) freezes? A credit or other traditional remedy?  Lawyers and technologists will have to think creatively to find appropriate and practical approaches to this issue.  And while SaaS and other “as a service” arrangements will grow in importance, perhaps the entire process will spawn MaaS, or “Metaverse as a Service.”
  • Open Source – Open source, already ubiquitous, promises to play a huge role in metaverse development by allowing developers to improve on what has come before. Whether or not the obligations of common open source licenses will be triggered will depend on the technical details of implementation. It is also possible that new open source licenses will be created to contemplate development for the metaverse.
  • Quantum Computing – Quantum computing has dramatically increased the capabilities of computers and is likely to continue to do so over the coming years. It will certainly be one of the technologies deployed to provide the computing speed needed for the metaverse to function. However, with the awesome power of quantum computing come threats to certain legacy protections we use today. Passwords and traditional security protocols may be rendered meaningless (requiring the development of post-quantum cryptography that is secure against both quantum and traditional computers). With raw, unchecked quantum computing power, the metaverse may be subject to manipulation and misuse. Regulation of quantum computing, as applied to the metaverse and elsewhere, may be needed.
  • Antitrust: Collaboration is a key to the success of the metaverse, as it is, by definition, a multi-tenant environment. Of course, collaboration among competitors may invoke antitrust concerns. Also, to the extent that larger technology companies may be perceived as leveraging their position to assert unfair control in any virtual world, there may be additional concerns.
  • Intellectual Property Issues: A host of IP issues will certainly arise, including infringement, licensing (and breaches thereof), IP protection and anti-piracy efforts, patent issues, joint ownership concerns, safe harbors, potential formation of patent cross-licensing organizations (which also may invoke antitrust concerns), trademark and advertising issues, and entertaining new brand licensing opportunities. The scope of content and technology licenses will have to be delicately negotiated with forethought to the potential breadth of the metaverse (e.g., it’s easy to limit a licensee’s rights based on territory, but what about for a virtual world with no borders or some borders that haven’t been drawn yet?). Rightsholders must also determine their particular tolerance level for unauthorized digital goods or creations. One can envision a need for a DMCA-like safe harbor and takedown process for the metaverse. Also, akin to the litigation that sprouted from the use of athletes’ or celebrities’ likenesses (and their tattoos) in videogames, it’s likely that IP issues and rights of publicity disputes will go way up as people’s virtual avatars take on commercial value in ways that their real human selves never did.
  • Content Moderation: Section 230 of the Communications Decency Act (CDA) has been the target of bipartisan criticism for several years now, yet it remains in effect despite its application in some distasteful ways. How will the CDA be applied to the metaverse, where the exchange of third party content is likely to be even more robust than what we see today on social media?  How will “bad actors” be treated, and what does an account termination look like in the metaverse? Much like the legal issues surrounding offensive content on today’s social media platforms, and barring a change in the law, the same kinds of disputes over user-generated content will persist and the same Section 230 defenses will be raised.
  • Blockchain, DAOs, Smart Contract and Digital Assets: Since the metaverse is planned as a single forum with disparate operators and users, the use of a blockchain (or blockchains) would seem to be one solution to act as a trusted, immutable ledger of virtual goods, in-world currencies and identity authentication, particularly when interactions may be somewhat anonymous or between individuals who may or may not trust each other and in the absence of a centralized clearinghouse or administrator for transactions. The use of smart contracts may be pervasive in the metaverse.  Investors or developers may also decide that DAOs (decentralized autonomous organizations) can be useful to crowdsource and fund opportunities within that environment as well.  Overall, a decentralized metaverse with its own discrete economy would feature the creation, sale and holding of sovereign digital assets (and their free use, display and exchange using blockchain-based payment networks within the metaverse). This would presumably give NFTs a role beyond mere digital collectibles and investment opportunities as well as a role for other forms of digital currency (e.g., cryptocurrency, utility tokens, stablecoins, e-money, virtual “in game” money as found in some videogames, or a system of micropayments for virtual goods, services or experiences).  How else will our avatars be able to build a new virtual wardrobe for what is to come?

With this shift to blockchain-based economic structures come the potential regulatory issues behind digital currencies. How will securities laws view digital assets that retain and form value in the metaverse?  Also, as in life today, visitors to the metaverse must be wary of digital currency schemes and meme coin scams, with regulators not too far behind policing the fraudsters and unlawful actors that will seek opportunities in the metaverse. While regulators and lawmakers are struggling to keep up with the current crop of issues, and despite any progress they may make in that regard, many open issues will remain and new issues will arise as digital tokens and currency (and the contracts underlying them) take on new relevance in a virtual world.

Big ideas are always exciting. Watching the metaverse come together is no different, particularly as it all is happening alongside additional innovations surrounding the web, blockchain and cryptocurrency (and, more than likely, updated laws and regulations). However, it’s still early. And we’ll have to see if the current vision of the metaverse will translate into long-term, concrete commercial and civic-minded opportunities for businesses, service providers, developers and individual artists and creators.  Ultimately, these parties will need to sort through many legal issues, both novel and commonplace, before creating and participating in a new virtual world concept that goes beyond the massive multi-user videogame platforms and virtual worlds we have today.

Article By Jeffrey D. Neuburger of Proskauer Rose LLP. Co-authored by  Jonathan Mollod.


© 2021 Proskauer Rose LLP.

Privacy Tip #309 – Women Poised to Fill Gap of Cybersecurity Talent

I have been advocating for gender equality in cybersecurity for years [related podcast and post].

The statistics on the participation of women in the field of cybersecurity continue to be bleak, despite significant outreach efforts, including “Girls Who Code” and programs to encourage girls to explore STEM (Science, Technology, Engineering and Mathematics) subjects.

Women are just now rising to positions from which they can help other women break into the field, land high-paying jobs, and combat the dearth of talent in technology. Judy Dinn, the new Chief Information Officer of TD Bank NA, is doing just that. One of her priorities is to encourage women to pursue tech careers. She recently told the Wall Street Journal that she “really, really always wants to make sure that female representation—whether they’re in grade school, high school, universities—that that funnel is always full.”

The Wall Street Journal article states that a study by AnitaB.org found that “women made up about 29% of the U.S. tech workforce in 2020.”  It is well known that companies are fighting for tech and cybersecurity talent and that there are many more open positions than talent to fill them. The tech and cybersecurity fields are growing with unlimited possibilities.

This is where women should step in. With increased support, and prioritized recruiting efforts that encourage women to enter fields focused on technology, we can tap more talent and begin to fill the gap of cybersecurity talent in the U.S.

Article By Linn F. Freedman of Robinson & Cole LLP


Copyright © 2021 Robinson & Cole LLP. All rights reserved.

Continuing Effort to Protect National Security Data and Networks

CMMC 2.0 – Simplification and Flexibility of DoD Cybersecurity Requirements

Evolving and increasing threats to U.S. defense data and national security networks have necessitated changes and refinements to the U.S. regulatory requirements intended to protect them.

In 2016, the U.S. Department of Defense (DoD) issued a Defense Federal Acquisition Regulation Supplement (DFARS) clause intended to better protect defense data and networks. In 2017, DoD began issuing a series of memoranda to further enhance protection of defense data and networks via the Cybersecurity Maturity Model Certification (CMMC). In December 2019, the Department of State, Directorate of Defense Trade Controls (DDTC) issued long-awaited guidance governing, in part, the minimum encryption requirements for storage, transport and/or transmission of controlled unclassified information (CUI) and technical defense information (TDI) otherwise restricted by ITAR.

DFARS initiated the government’s efforts to protect national security data and networks by implementing specific NIST cyber requirements for all DoD contractors with access to CUI, TDI or a DoD network. DFARS compliance relied on contractor self-assessment rather than independent certification.

CMMC provided a broad framework to enhance cybersecurity protection for the Defense Industrial Base (DIB). CMMC proposed a verification program to ensure that NIST-compliant cybersecurity protections were in place to protect CUI and TDI that reside on DoD and DoD contractors’ networks. Unlike DFARS, CMMC initially required certification of compliance by an independent cybersecurity expert.

The DoD has announced an updated cybersecurity framework, referred to as CMMC 2.0. The announcement comes after a months-long internal review of the proposed CMMC framework. It still could take nine to 24 months for the final rule to take shape. But for now, CMMC 2.0 promises to be simpler to understand and easier to comply with.

Three Goals of CMMC 2.0

Broadly, CMMC 2.0 is similar to the earlier-proposed framework. Familiar elements include a tiered model, required assessments, and contractual implementation. But the new framework is intended to facilitate three goals identified by DoD’s internal review.

  • Simplify the CMMC standard and provide additional clarity on cybersecurity regulations, policy, and contracting requirements.
  • Focus on the most advanced cybersecurity standards and third-party assessment requirements for companies supporting the highest priority programs.
  • Increase DoD oversight of professional and ethical standards in the assessment ecosystem.

Key Changes under CMMC 2.0

The most impactful changes in CMMC 2.0 are:

  • A reduction from five to three security levels.
  • Reduced requirements for third-party certifications.
  • Allowances for plans of action and milestones (POA&Ms).

CMMC 2.0 has only three levels of cybersecurity

An innovative feature of CMMC 1.0 had been the five-tiered model that tailored a contractor’s cybersecurity requirements according to the type and sensitivity of the information it would handle. CMMC 2.0 keeps this model, but eliminates the two “transitional” levels in order to reduce the total number of security levels to three. This change also makes it easier to predict which level will apply to a given contractor. At this time, it appears that:

  • Level 1 (Foundational) will apply to federal contract information (FCI) and will be similar to the old first level;
  • Level 2 (Advanced) will apply to controlled unclassified information (CUI) and will mirror NIST SP 800-171 (similar to, but simpler than, the old third level); and
  • Level 3 (Expert) will apply to more sensitive CUI and will be partly based on NIST SP 800-172 (possibly similar to the old fifth level).

Significantly, CMMC 2.0 focuses on cybersecurity practices, eliminating the few so-called “maturity processes” that had baffled many DoD contractors.

CMMC 2.0 relieves many certification requirements

Another feature of CMMC 1.0 had been the requirement that all DoD contractors undergo third-party assessment and certification. CMMC 2.0 is much less ambitious and allows Level 1 contractors — and even a subset of Level 2 contractors — to conduct only an annual self-assessment. It is worth noting that a subset of Level 2 contractors — those having “critical national security information” — will still be required to seek triennial third-party certification.

CMMC 2.0 reinstitutes POA&Ms

An initial objective of CMMC 1.0 had been that — by October 2025 — contractual requirements would be fully implemented by DoD contractors. There was no option for partial compliance. CMMC 2.0 reinstitutes a regime that will be familiar to many, by allowing for submission of Plans of Action and Milestones (POA&Ms). The DoD still intends to specify a baseline number of non-negotiable requirements. But a remaining subset will be addressable by a POA&M with clearly defined timelines. The announced framework even contemplates waivers “to exclude CMMC requirements from acquisitions for select mission-critical requirements.”

Operational takeaways for the defense industrial base

For many DoD contractors, CMMC 2.0 will not significantly impact their required cybersecurity practices — for FCI, focus on basic cyber hygiene; and for CUI, focus on NIST SP 800-171. But the new CMMC 2.0 framework dramatically reduces the number of DoD contractors that will need third-party assessments. It could also allow contractors to delay full compliance through the use of POA&Ms beyond 2025.

Increased Risk of Enforcement

Regardless of the proposed simplicity and flexibility of CMMC 2.0, DoD contractors need to remain vigilant to meet their respective CMMC 2.0 level cybersecurity obligations.

Immediately preceding the CMMC 2.0 announcement, the U.S. Department of Justice (DOJ) announced a new Civil Cyber-Fraud Initiative on October 6 to combat emerging cyber threats to the security of sensitive information and critical systems. In its announcement, the DOJ advised that it would pursue government contractors who fail to follow required cybersecurity standards.

As Bradley has previously reported in more detail, the DOJ plans to utilize the False Claims Act to pursue cybersecurity-related fraud by government contractors or involving government programs, where entities or individuals put U.S. information or systems at risk by knowingly:

  • Providing deficient cybersecurity products or services
  • Misrepresenting their cybersecurity practices or protocols, or
  • Violating obligations to monitor and report cybersecurity incidents and breaches.

The DOJ also expressed its intent to work closely on the initiative with other federal agencies, subject matter experts and its law enforcement partners throughout the government.

As a result, while CMMC 2.0 will provide some simplicity and flexibility in implementation and operations, U.S. government contractors need to be mindful of their cybersecurity obligations to avoid new heightened enforcement risks.

© 2021 Bradley Arant Boult Cummings LLP


Legal Implications of Facebook Hearing for Whistleblowers & Employers – Privacy Issues on Many Levels

On Sunday, October 3rd, Facebook whistleblower Frances Haugen publicly revealed her identity on the CBS television show 60 Minutes. Formerly a member of Facebook’s civic misinformation team, she had previously reported the company to the Securities and Exchange Commission (SEC) for a variety of concerning business practices, including lying to investors and amplifying the January 6th Capitol Hill attack via its platform.

Like all instances of whistleblowing, Ms. Haugen’s actions have a considerable array of legal implications — not only for Facebook, but for the technology sector and for labor practices in general. Especially notable is the fact that Ms. Haugen reportedly signed a confidentiality agreement, sometimes called a non-disclosure agreement (NDA), with Facebook, which may complicate the legal process.

What are the Legal Implications of Breaking a Non-Disclosure Agreement?

After secretly copying thousands of internal documents and memos detailing these practices, Ms. Haugen left Facebook in May and testified before a Senate subcommittee on October 5th.  Because she revealed information from the documents she took, Facebook could take legal action against Ms. Haugen by accusing her of stealing confidential information. Ms. Haugen’s actions raise questions about the enforceability of non-disclosure and confidentiality agreements when it comes to filing whistleblower complaints.

“Paradoxically, Big Tech’s attack on whistleblower-insiders is often aimed at the whistleblower’s disclosure of so-called confidential inside information of the company.  Yet, the very concerns expressed by the Facebook whistleblower and others inside Big Tech go to the heart of these same allegations—violations of privacy of the consuming public whose own personal data has been used in a way that puts a target on their backs,” said Renée Brooker, a partner with Tycko & Zavareei LLP, a law firm specializing in representing whistleblowers.

Since Ms. Haugen came forward, Facebook has stated that it will not retaliate against her for filing a whistleblower complaint. It is unclear whether such protection from legal action extends to other former employees in Ms. Haugen’s position.

“Other employees like Frances Haugen with information about corporate or governmental misconduct should know that they do not have to quit their jobs to be protected. There are over 100 federal laws that protect whistleblowers – each with its own focus on a particular industry, or a particular whistleblower issue,” said Richard R. Renner of Kalijarvi, Chuzi, Newman & Fitch, PC, a long-time employment lawyer.

According to the Wall Street Journal, Ms. Haugen’s confidentiality agreement permits her to disclose information to regulators, but not to share proprietary information. A tricky balancing act to navigate.

“Big Tech’s attempt to silence whistleblowers are antithetical to the principles that underlie federal laws and federal whistleblower programs that seek to ferret out illegal activity,” Ms. Brooker said. “Those reporting laws include federal and state False Claims Acts, and the SEC Whistleblower Program, which typically feature whistleblower rewards and anti-retaliation provisions.”

Legal Implications for Facebook & Whistleblowers

Large tech organizations like Facebook have an overarching influence on digital information and how it is shared with the public. Whistleblowers like Ms. Haugen expose potential information about how companies accused of harmful practices act against their own consumers, but also risk disclosing proprietary business information which may or may not be harmful to consumers.

Some of the most significant concerns Haugen expressed to Congress were the tip of the iceberg according to those familiar with whistleblowing reports on Big Tech. Aside from the burden of proof required for such releases to Congress, the threats of employer retaliation and legal repercussions may prevent internal concerns from coming to light.

“Facebook should not be singled out as a lone actor. Big Tech needs to be held accountable and insiders can and should be encouraged to come forward and be prepared to back up their allegations with hard evidence sufficient to allow governments to conduct appropriate investigations,” Ms. Brooker said.

As the concern for cybersecurity and data protection continues to hold public interest, more whistleblower disclosures that could hold Big Tech and other companies accountable are coming to light.

Haugen’s testimony during the October 5, 2021 congressional hearing revealed a possible expanding definition of media regulation versus consumer censorship. Although these allegations are the latest against a large company such as Facebook, more whistleblowers may continue to come forward with similar accusations, bringing additional implications for privacy, employment law and whistleblower protections.

“The Facebook whistleblower’s revelations have opened the door just a crack on how Big Tech is exploiting American consumers,” Ms. Brooker said.

This article was written by Rachel Popa, Chandler Ford and Jessica Scheck of the National Law Review.

Ransom Demands: To Pay or Not to Pay?

As the threat of ransomware attacks against companies has skyrocketed, so has the burden on companies forced to decide whether to pay cybercriminals a ransom demand. Corporate management increasingly is faced with balancing myriad legal and business factors in making real-time, high-stakes “bet the company” decisions with little or no precedent to follow. In a recent advisory, the U.S. Department of the Treasury (Treasury) has once again discouraged companies from making ransom payments, warning that doing so may risk sanctions.

OFAC Ransom Advisory

On September 21, 2021, the Treasury’s Office of Foreign Assets Control (OFAC) issued an Advisory that updates and supersedes OFAC’s Advisory on Potential Sanctions Risks for Facilitating Ransomware Payments, issued on October 1, 2020. This updated OFAC Advisory follows on the heels of the Biden Administration’s heightened interest in combating the growing risk and reality of cyber threats that may adversely impact national security and the economy.

According to Federal Bureau of Investigation (FBI) statistics from 2019 to 2020 on ransomware attacks, there was a 21 percent increase in reported ransomware attacks and a 225 percent increase in associated losses. All organizations across all industry sectors in the private and public arenas are potential targets of such attacks. As noted by OFAC, cybercriminals often target particularly vulnerable entities, such as schools and hospitals, among others.

While some cybercriminals are linked to foreign state actors primarily motivated by political interests, many threat actors are simply in it “for the money.” Every day cybercriminals launch ransomware attacks to wreak havoc on vulnerable organizations, disrupting their business operations by encrypting and potentially stealing their data. These cybercriminals often demand ransom payments in the millions of dollars in exchange for a “decryptor” key to unlock encrypted files and/or a “promise” not to use or publish stolen data on the Dark Web.

The recent OFAC Advisory states in no uncertain terms that the “U.S. government strongly discourages all private companies and citizens from paying ransom or extortion demands.” OFAC notes that such ransomware payments could be “used to fund activities adverse to the national security and foreign policy objectives of the United States.” The Advisory further states that ransom payments may perpetuate future cyber-attacks by incentivizing cybercriminals. In addition, OFAC cautions that in exchange for payments to cybercriminals “there is no guarantee that companies will regain access to their data or be free from further attacks.”

The OFAC Advisory also underscores the potential risk of violating sanctions associated with ransom payments by organizations. As a reminder, various U.S. federal laws, including the International Emergency Economic Powers Act and the Trading with the Enemy Act, prohibit U.S. persons or entities from engaging in financial or other transactions with certain blacklisted individuals, organizations or countries – including those listed on OFAC’s Specially Designated Nationals and Blocked Persons List or countries subject to embargoes (such as Cuba, the Crimea region of Ukraine, North Korea and Syria).

Penalties & Mitigating Factors

If a ransom payment is deemed to have been made to a cybercriminal with a nexus to a blacklisted organization or country, OFAC may impose civil monetary penalties for violations of sanctions based on strict liability, even if a person or organization did not know it was engaging in a prohibited transaction.

However, OFAC will consider various mitigating factors in deciding whether to impose penalties against organizations for sanctioned transactions, including if the organizations adopted enhanced cybersecurity practices to reduce the risk of cyber-attacks, or promptly reported ransomware attacks to law enforcement and regulatory authorities (including the FBI, U.S. Secret Service and/or Treasury’s Office of Cybersecurity and Critical Infrastructure Protection).

“OFAC also will consider a company’s full and ongoing cooperation with law enforcement both during and after a ransomware attack” as a “significant” mitigating factor. In encouraging organizations to self-report ransomware attacks to federal authorities, OFAC notes that information shared with law enforcement may aid in tracking cybercriminals and disrupting or preventing future attacks.

Conclusion

In short, payment of a ransom is not illegal per se, so long as the transaction does not involve a sanctioned party on OFAC’s blacklist. Moreover, the recent ransomware Advisory “is explanatory only and does not have the force of law.” Nonetheless, organizations should consider carefully OFAC’s advice and guidance in deciding whether to pay a ransom demand.

In addition to the OFAC Advisory, management should consider the following:

  • Ability to restore systems from viable (unencrypted) backups

  • Marginal time savings in restoring systems with a decryptor versus backups

  • Preservation of infected systems in order to conduct a forensics investigation

  • Ability to determine whether data was accessed or exfiltrated (stolen)

  • Reputational harm if data is published by the threat actor

  • Likelihood that the organization will be legally required to notify individuals of the attack regardless of whether their data is published on the Dark Web.

Should an organization decide it has no choice other than to make a ransom payment, it should facilitate the transaction through a reputable company that first performs and documents an OFAC sanctions check.

© 2021 Wilson Elser


Privilege Dwindles for Data Breach Reports

Data privacy lawyers and cyber security incident response professionals are losing sleep over the growing number of federal courts ordering disclosure of post-data breach forensic reports.  Following the decisions in Capital One and Clark Hill, another district court has recently ordered the defendant in a data breach litigation to turn over the forensic report it believed was protected under the attorney-client privilege and work product doctrines. These three decisions help underscore that maintaining privilege over forensic reports may come down to the thinnest of margins—something organizations should keep in mind given the ever-increasing risk of litigation that can follow a cybersecurity incident.

In May 2019, convenience store and gas station chain Rutter’s received two alerts signaling a possible breach of its internal systems. The same day, Rutter’s hired outside counsel to advise on potential breach notification obligations. Outside counsel immediately hired a forensic investigator to perform an analysis to determine the character and scope of the incident. Once litigation ensued, Rutter’s withheld the forensic report from production on the basis of the attorney-client privilege and work product doctrines. Rutter’s argued that both it and its outside counsel understood the report to be privileged because it was made in anticipation of litigation. The Court rejected this notion.

With respect to the work product doctrine, the Court stated that the doctrine only applies where identifiable or impending litigation is the “primary motivating purpose” of creating the document. The Court found that the forensic report, in this case, was not prepared for the prospect of litigation. The Court relied on the forensic investigator’s statement of work which stated that the purpose of the investigation was to “determine whether unauthorized activity . . . resulted in the compromise of sensitive data.” The Court decided that because Rutter’s did not know whether a breach had even occurred when the forensic investigator was engaged, it could not have unilaterally believed that litigation would result.

The Court was also unpersuaded by the attorney-client privilege argument. Because the forensic report only discussed facts and did not involve “opinions and tactics,” the Court held that the report and related communications were not protected by the attorney-client privilege. The Court emphasized that the attorney-client privilege does not protect communications of fact, nor communications merely because a legal issue can be identified.

The Rutter’s decision comes on the heels of the Capital One and Clark Hill rulings, which both held that the defendants failed to show that the forensic reports were prepared solely in anticipation of litigation. In Capital One, the company hired outside counsel to manage the cybersecurity vendor’s investigation after the breach; however, the company already had a longstanding relationship and pre-existing agreement with the vendor. The Court found that the vendor’s services and the terms of its new agreement were essentially the same both before and after the outside counsel’s involvement. The Court also relied on the fact that the forensic report was eventually shared with Capital One’s internal response team, demonstrating that the report was created for various business purposes.

In response to the data breach in the Clark Hill case, the company hired a vendor to investigate and remediate the systems after the attack. The company also hired outside counsel, who in turn hired a second cybersecurity vendor to assist with litigation stemming from the attack. During the litigation, the company refused to turn over the forensic report prepared by the outside counsel’s vendor. The Court rejected this “two-track” approach, finding that the outside counsel’s vendor report had not been prepared exclusively for use in preparation for litigation. Like in Capital One, the Court found, among other things, that the forensic report was shared not only with inside and outside counsel, but also with employees inside the company, IT, and the FBI.

As these cases demonstrate, the legal landscape around responding to security incidents has become filled with traps for the unwary.  A coordinated response led by outside counsel is key to mitigating a data breach and ensuring the lines are not blurred between “ordinary course of business” factual reports and incident reports that are prepared for litigation purposes.

© 2021 Bracewell LLP


Ransomware Payments Can Lead to Sanctions and Reporting Obligations for Financial Institutions

With cybercrime on the rise, two U.S. Treasury Department components, the Office of Foreign Assets Control (“OFAC”) and the Financial Crimes Enforcement Network (“FinCEN”), issued advisories on one of the most insidious forms of cyberattack – ransomware.

Ransomware is a form of malicious software designed to block access to a system or data.  The targets of ransomware attacks are required to pay a ransom to regain access to their information or system, or to prevent the publication of their sensitive information.  Ransomware attackers usually demand payment in the form of convertible virtual currency (“CVC”), which can be more difficult to trace.  Although ransomware attacks were already on the rise (there was a 37% annual increase in reported cases and a 147% increase in associated losses from 2018 to 2019), the COVID-19 pandemic has exacerbated the problem, as cyber actors target online systems that U.S. persons rely on to continue conducting business.

OFAC

The OFAC advisory focuses on the potential sanctions risks for those companies and financial institutions that are involved in ransomware payments to bad actors, including ransomware victims and those acting on their behalf, such as “financial institutions, cyber insurance firms, and companies involved in digital forensics and incident response.”  OFAC stresses that these payments may violate US sanctions laws or OFAC regulations, and encourage future attacks.

OFAC maintains a consolidated list of sanctioned persons, which includes numerous malicious cyber actors and the digital currency addresses connected to them.[1]  Any payment to those organizations or their digital currency wallets or addresses, including the payment of a ransom itself, is a violation of economic sanctions laws regardless of whether the parties involved in the payment knew or had reason to know that the transaction involved a sanctioned party.  The advisory states that “OFAC has imposed, and will continue to impose, sanctions on these actors and others who materially assist, sponsor, or provide financial, material, or technological support for these activities.”
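
By way of illustration only (and not as legal or compliance advice), screening against that consolidated list can be as mechanical as the sketch below. It assumes a current export of the SDN list has been saved locally as a CSV file named sdn.csv; the file name, layout and sample address are assumptions rather than OFAC specifications, and any real screening program should follow OFAC’s published formats and be documented.

    # Minimal sketch, assuming a locally saved CSV export of OFAC's SDN list.
    # The file name, layout and sample address are illustrative assumptions;
    # confirm the current list format on OFAC's site and document each check.
    import csv

    SDN_FILE = "sdn.csv"  # assumed local copy of the consolidated SDN list

    def address_is_listed(address: str) -> bool:
        """Return True if the digital currency address appears in the SDN export."""
        needle = address.strip().lower()
        with open(SDN_FILE, newline="", encoding="utf-8", errors="replace") as handle:
            for row in csv.reader(handle):
                if any(needle in field.lower() for field in row):
                    return True
        return False

    if __name__ == "__main__":
        wallet = "1ExampleRansomWalletAddressXXXXXX"  # hypothetical address to screen
        print("Appears on SDN export:", address_is_listed(wallet))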

In addition to violating sanctions laws, OFAC warned that ransomware payments with a sanctions nexus threaten national security interests.  These payments enable criminals to profit and advance their illicit aims, including funding activities adverse to U.S. national security and foreign policy objectives.  Ransomware payments also embolden cyber criminals and provide no guarantee that the victim will regain access to their stolen data.


OFAC encourages financial institutions to implement a risk-based compliance program to mitigate exposure to potential sanctions violations.  Accordingly, these sanctions compliance programs should account for the risk that a ransomware payment may involve a Specially Designated National, blocked person, or embargoed jurisdiction.  OFAC encouraged victims of ransomware attacks to contact law enforcement immediately, and listed the contact information for relevant government agencies.  OFAC wrote that it considers the “self-initiated, timely, and complete report of a ransomware attack to law enforcement to be a significant mitigating factor in determining an appropriate enforcement outcome if the situation is later determined to have a sanctions nexus.”  OFAC will also consider a company’s cooperation efforts both during and after the ransomware attack when evaluating a possible outcome.

Such cooperation may also be a “significant mitigating factor” in determining whether and to what extent enforcement is necessary.

FinCEN

FinCEN’s advisory also encourages entities that process payments potentially related to ransomware to report to and cooperate with law enforcement.  The FinCEN advisory arms these institutions with information about the role of financial intermediaries in payments, ransomware trends and typologies, related financial red flags, and effective reporting and information sharing related to ransomware attacks.

According to FinCEN, ransomware attacks are growing in size, scope, and sophistication.  The attacks have increasingly targeted larger enterprises for bigger payouts, and cybercriminals are sharing resources to increase the effectiveness of their attacks.  The demand for payment in anonymity-enhanced cryptocurrencies has also been on the rise.

FinCEN touted “[p]roactive prevention through effective cyber hygiene, cybersecurity controls, and business continuity resiliency” as the best ransomware defense.  The advisory lists numerous red flags designed to assist financial institutions in detecting, preventing, and ultimately reporting suspicious transactions associated with ransomware payments.  These red flags include, among others: (1) IT activity that shows the existence of ransomware software, including system log files, network traffic, and file information; (2) a customer’s CVC address that appears on open sources or is linked to past ransomware attacks; (3) transactions that occur between a high-risk organization and digital forensics and incident response companies or cyber insurance companies; and (4) customers that request payment in CVC, but show limited knowledge about the form of currency.

Finally, FinCEN reminded financial institutions about their obligations under the Bank Secrecy Act to report suspicious activity, including ransomware payments.  A financial institution is required to file a suspicious activity report (“SAR”) with FinCEN if it knows, suspects, or has reason to suspect that the attempted or completed transaction involves $5,000 or more derived from illegal activity.  “Reportable activity can involve transactions . . . related to criminal activity like extortion and unauthorized electronic intrusions,” the advisory says.  Given this, suspected ransomware payments and attempted payments should be reported to FinCEN in SARs.  The advisory provides information on how financial institutions and others should report and share the details related to ransomware attacks to increase the utility and effectiveness of the SARs.  For example, those filing ransomware-related SARs should provide all pertinent available information.  In keeping with FinCEN’s previous guidance on SAR filings relating to cyber-enabled crime, FinCEN expects SARs to include detailed cyber indicators.  Information, including “relevant email addresses, Internet Protocol (IP) addresses with their respective timestamps, virtual currency wallet addresses, mobile device information (such as device International Mobile Equipment Identity (IMEI) numbers), malware hashes, malicious domains, and descriptions and timing of suspicious electronic communications,” will assist FinCEN in protecting the U.S. financial system from ransomware threats.
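
One hypothetical way to keep those data points organized during incident response, so they can later be carried into a SAR narrative, is a simple structured record like the sketch below. The field names are assumptions for illustration and do not correspond to any official SAR schema or FinCEN filing format.

    # Minimal sketch: gather the cyber indicators FinCEN highlights for
    # ransomware-related SARs in one structured record. Field names are
    # illustrative assumptions, not an official SAR schema.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class RansomwareCyberIndicators:
        email_addresses: List[str] = field(default_factory=list)
        ip_addresses_with_timestamps: List[Tuple[str, str]] = field(default_factory=list)  # (IP, UTC timestamp)
        virtual_currency_wallets: List[str] = field(default_factory=list)
        device_imeis: List[str] = field(default_factory=list)
        malware_hashes: List[str] = field(default_factory=list)
        malicious_domains: List[str] = field(default_factory=list)
        suspicious_communications: List[str] = field(default_factory=list)  # descriptions and timing

    # Example with hypothetical values only:
    indicators = RansomwareCyberIndicators(
        email_addresses=["attacker@example.invalid"],
        ip_addresses_with_timestamps=[("203.0.113.7", "2020-09-15T02:14:00Z")],
        virtual_currency_wallets=["bc1qhypotheticalwalletaddress000000"],
        malware_hashes=["aabbccddeeff00112233445566778899"],
        malicious_domains=["ransom-note.example"],
    )
    print(indicators)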

[1] https://home.treasury.gov/news/press-releases/sm556


© Copyright 2020 Squire Patton Boggs (US) LLP

Reasons for Communicating Clearly With Your Insurer Regarding the Scope of Coverage Before Purchasing Cyber Insurance

Purchasing cyber insurance is notoriously complex—standard form policies do not currently exist, many key terms setting the scope of coverage have not been analyzed by courts, and cyber risks are complicated and constantly evolving.  Given these complexities, prospective policyholders should consider, before purchasing a cyber policy, communicating their expectations for coverage in clear and specific terms to their insurer.  Such communications, which can be conducted through an insurance broker, can help a policyholder obtain policy terms that accurately reflect their desired coverage.  Additionally, these communications create a written record of the contracting parties’ understanding, which may prove useful should the insurer later contend that coverage is not available consistent with these discussions and the policyholder’s expectations.

Singling out a key policy provision and examining the coverage issues that provision can present helps illustrate the potential value of such communication.  Currently, the high-profile Mondelez International, Inc. v. Zurich American Insurance Co. litigation provides an excellent opportunity to examine the coverage issues that can arise from one such provision:  the so-called “war exclusion.”  This exclusion, a variant of which is included in almost every insurance policy by insurers seeking to limit their exposure to potentially catastrophic losses that might result from war, may sound straightforward but can be difficult to apply, as the line between war and other conflicts is often fuzzy and fact-specific.  Compare In re Sept. 11 Litig., 931 F. Supp. 2d 496, 508 (S.D.N.Y. 2013), aff’d, 751 F.3d 86 (2d Cir. 2014) (concluding that the September 11, 2001 attack by Al Qaeda was an “act of war”), with Pan Am. World Airways, Inc. v. Aetna Cas. & Sur. Co., 505 F.2d 989, 1015 (2d Cir. 1974) (holding that the hijacking of an airplane by the Popular Front for the Liberation of Palestine was not the result of “war”).  This is especially true in the cyber context, where understanding the precise nature and purpose of a cyber attack is often difficult.  While the Mondelez case does not involve a dedicated cyber insurance policy—it concerns a property insurance policy that includes coverage for “physical loss or damage to electronic data, programs, or software, including physical loss or damage caused by the malicious introduction of a machine code or instruction”—it is still instructive because the insured seeks coverage for a cyber attack and the insurer disputes coverage based on the war exclusion, which almost all cyber insurance policies contain in some fashion.

The dispute in Mondelez arose when the policyholder suffered over one hundred million dollars in losses due to network disruptions caused by the NotPetya ransomware attack and sought coverage under their property insurance policy for “physical loss or damage to electronic data, programs, or software . . . .”  See Complaint, Mondelez International, Inc. v. Zurich American Insurance Co., No. 2018L011008, 2018 WL 4941760 (Ill. Cir. Ct., Oct. 10, 2018).  In response, the insurer denied coverage based on the war exclusion that precluded coverage for “loss or damage directly or indirectly caused by or resulting from . . . hostile or warlike action in time of peace or war, including action in hindering, combatting or defending against an actual, impending or expected attack by any:  (i) government or sovereign power (de jure or de facto); (ii) military, naval, or air force; or (iii) agent or authority of any party specified in i or ii above.”  In short, the policyholder believed it bought broad coverage for ransomware attacks, but now must litigate whether the NotPetya attack was a “warlike action” by a government “agent,” under circumstances where numerous sources link the cyber attack to Russia and its armed forces (though Russia denies any involvement).  While the Mondelez case is still in the early stages, and details of any communications among the parties regarding the wording and meaning of the war exclusion are not publicly known, the mere existence of this litigation highlights the challenges that can face a policyholder who learns only after a substantial loss that their insurer reads a key policy provision to preclude coverage that the policyholder expected to be available.

As noted above, communication prior to policy placement can be a valuable tool to secure clear wording for key policy provisions and potentially avoid this kind of situation.  While this may seem obvious, such communication is often overlooked by policyholders more focused on other policy details like limits and premiums.  A close review of the war exclusion helps illustrate the potential benefits of these communications.  While the precise phrasing of the war exclusion at issue in Mondelez is more typical of property policies than cyber policies, war exclusions in many cyber policies arguably apply to conduct not only by state actors but also by quasi-state actors or groups with political motives.  For this reason, policyholders may want to seek language specifying that the exclusion only applies to acts by a military force or a sovereign nation, as many cyber attacks are attributed to quasi-state actors or non-state groups with political ends, or are the subject of debated attribution.  Similarly, some war exclusions apply not only to specified conflicts such as war, invasion, and mutiny, but also to more amorphous conduct like “warlike actions”—policyholders seeking greater certainty may wish to avoid such language.  Further, as with any exclusion, avoiding overbroad introductory language (like that excluding any loss “in any way related to or arising out of” war) is generally in a policyholder’s interest.  And even if a war exclusion is broadly worded, some insurers will include a carve-back creating an exception for losses due to attacks on computer systems or breaches of network security, thus preserving cyber coverage even when the war exclusion might otherwise apply.  Given the impact that small changes in wording can have on the scope of coverage, communicating clearly—with respect to the war exclusion or any other key policy provision—can play a crucial role in assuring that a policyholder secures wording that provides the coverage they desire.  Of course, an insurer may respond to a policyholder by refusing to revise a policy term or insisting that a desired coverage is unavailable, in which case the policyholder has the benefit of understanding a policy’s purported scope prior to purchase and the opportunity to investigate coverage from other insurers.

In addition, communication allows a policyholder to make a record of their expectations as to the scope of coverage, which may prove useful if an insurer later refuses to provide coverage consistent with the expectations that the policyholder conveyed.  Many courts interpreting disputed policy language put substantial weight on an insured’s reasonable expectations and often rely on communications between policyholders and insurers to support a policyholder’s reading.  See, e.g., Monsanto Co. v. Int’l Ins. Co. (EIL), 652 A.2d 36, 39 (Del. 1994); Celley v. Mut. Benefit Health & Acc. Ass’n, 324 A.2d 430, 435 (Pa. Super. 1974); Ponder v. State Farm Mut. Auto. Ins. Co., 12 P.3d 960, 962 (N.M. 2000); Michigan Mutual Liability Co. v. Hoover Bros., Inc., 237 N.E.2d 754, 756 (Ill. App. 1968).  As the recently-issued Restatement of The Law of Liability Insurance observes, where “extrinsic evidence shows that a reasonable person in the policyholder’s position would give the term a different meaning” than the one advanced by the insurer, the policyholder’s proposed meaning will often control.  Another recent case addressing a war exclusion (completely outside the cyber context) demonstrates the role such communications may play in interpreting disputed policy provisions, as the court’s analysis of the exclusion included a review of the communications during the underwriting process between the insured, the broker, and the insurer and an examination of what those communications indicated about the parties’ intent for the exclusion’s application.  Universal Cable Prods., LLC v. Atl. Specialty Ins. Co., 929 F.3d 1143 (9th Cir. 2019).  While contested coverage provisions should generally be read in an insured’s favor so long as that reading is reasonable—even in the absence of favorable underwriting communications—the cases above underscore the potential value in establishing during the underwriting process a record of the insured’s expectations as to the scope of coverage (especially in an area such as cyber insurance, where guidance like prior court decisions is limited).

For these reasons, policyholders should consider clearly communicating their intentions to their insurer when purchasing cyber insurance—this may include communicating not just questions about the scope of coverage and requests for modifications to the policy, but also the concerns animating those questions and the goals behind those requested modifications.  When having such communications with cyber insurers, policyholders will generally want to work closely with an insurance broker knowledgeable about cyber insurance, and may also want to consult experienced coverage counsel.  Clear communication during the underwriting process can play an important role in helping policyholders obtain cyber coverage that will meet their expectations should they one day confront a cyber event.


© 2020 Gilbert LLP

3 Cyberattacks and 3 Practical Measures Lawyers Can Take to Protect Themselves

Hackers are targeting lawyers with cyberattacks, and the coronavirus is making things worse. With the COVID-19 pandemic and the resulting shift to remote work, hackers are going after lawyers with even greater intensity. The ABA Journal recently reported that “scams multiply during the COVID crisis.”

The Top 3 Cyberattacks Targeting Law Firms

You’re probably displaced from your usual working space and feeling out of whack. That sets the stage for hackers to take advantage of the confusion — and your home computer setup. You need to know the traits of the most common cyberthreats so you can identify a scam.

1. Phishing Email Scams

Hackers send phishing emails that impersonate a legitimate sender and fool the recipient into giving up information. Most phishing scams trick their victims into clicking on malicious URLs. These phishing links redirect the victim to fake sites — most commonly, spoofed login pages for Office 365 and online banking — and capture their username and password. With those credentials in hand, the hacker can sign in as the victim and access confidential data or withdraw funds.
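To make those mechanics concrete, the short Python sketch below (an illustration only, not a substitute for dedicated anti-phishing tools) pulls links out of an email body and flags any whose domain is not on a small allowlist of trusted login pages. The allowlisted domains and the sample message are hypothetical.

# Illustrative sketch only: flags every link whose domain is not explicitly
# trusted. Real anti-phishing tools go much further (reputation feeds,
# sandboxing, machine learning); the domains below are hypothetical examples.
import re
from urllib.parse import urlparse

TRUSTED_LOGIN_DOMAINS = {"login.microsoftonline.com", "www.yourbank.example"}

def suspicious_links(email_body: str) -> list[str]:
    """Return links in the email whose domain is not on the allowlist."""
    links = re.findall(r"https?://\S+", email_body)
    flagged = []
    for link in links:
        domain = urlparse(link).netloc.lower()
        if domain not in TRUSTED_LOGIN_DOMAINS:
            flagged.append(link)
    return flagged

if __name__ == "__main__":
    body = "Your mailbox is full. Verify now: http://office365-login.example.net/verify"
    for link in suspicious_links(body):
        print("Suspicious link:", link)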

In 2018, nearly 80% of law firms experienced phishing attacks, according to security research firm Osterman Research. As COVID-19 increases anxiety and the volume of email in your inbox, hackers have taken advantage. In mid-March 2020, just as COVID-19 ramped up in the United States, hackers sent emails purporting to be from the World Health Organization (WHO). The phishing email asked the victim to open an attachment containing official information on protecting yourself from the coronavirus. Opening the attachment actually downloaded a keystroke logger that records what is being typed. Keystroke logging is typically used to capture even more login credentials so the hacker can access as many sites and services as possible.

For further details, learn how viral coronavirus scams are attacking computers and smartphones.

2. Ransomware

Ransomware is one of the four biggest cybersecurity risks law firms face, according to Law Technology Today. This cyberattack is a type of malware that, once installed, denies access to a computer system or data. Ransomware is typically installed through email attachments, “malvertising,” or drive-by downloads. To regain access to the compromised device, the victim must wire funds to the hacker. Even if the ransom is paid, there is no guarantee that the hackers will restore access.

3. Data Breaches

Data breaches involve the loss of confidential data or unauthorized access to that data. They often follow a successful phishing or ransomware attack, which are common entry points for a breach. The loss of this data can have devastating consequences for a law firm. If clients feel that their privacy was violated in the breach, they might sue.

3 Practical Cyberthreat Solutions for Law Firms

Law firms can take several practical measures to protect their systems and data. Safeguarding identity and access, encrypting data, and investing in cybersecurity software (if possible) for anti-phishing and anti-malware will lower the risk of a successful cyberattack.

1. Encrypt Data

Lawyers rely on email and document sharing to run their firms. As these documents and communications travel across the internet, they can be intercepted. But when data is encrypted, anything a hacker intercepts is substantially harder to read. A VPN (Virtual Private Network) encrypts data in a cost-effective, non-intrusive, and reliable way. By creating a secure “tunnel” between your computer and the internet, a VPN typically protects data with 256-bit encryption. Encryption of this strength is used by banks and by the U.S. government to protect classified data.
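A VPN performs this encryption automatically, but as a rough illustration of what 256-bit encryption involves, the Python sketch below uses the widely available cryptography package to encrypt and decrypt a document with AES-256-GCM. It is a minimal sketch under simplifying assumptions; in particular, real systems require careful key management, which is omitted here.

# Minimal illustration of 256-bit (AES-256-GCM) encryption using the
# "cryptography" package (pip install cryptography). A VPN performs
# comparable encryption transparently; this sketch only shows the idea.
# The sample plaintext is hypothetical, and key storage is omitted.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_document(plaintext: bytes) -> tuple[bytes, bytes, bytes]:
    key = AESGCM.generate_key(bit_length=256)   # 256-bit key
    nonce = os.urandom(12)                      # must be unique per message
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return key, nonce, ciphertext

def decrypt_document(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    return AESGCM(key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    key, nonce, ct = encrypt_document(b"Confidential client memo")
    assert decrypt_document(key, nonce, ct) == b"Confidential client memo"
    print("Encrypted bytes:", ct.hex()[:32], "...")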

2. Use Two-Factor Authentication (2FA)

If you’re in the 50% of people who use the same passwords for personal and work accounts, then take note. Weak and reused passwords increase your chances of experiencing a cyberattack. 2FA adds protection to your username and password, making it much harder to compromise your credentials. Think of 2FA as a dynamic, time-sensitive, secondary password.

2FA pairs the password with a second, one-time passcode sent to the employee’s device. Unless that code is submitted on the follow-up login screen in a timely manner, it expires. Where one-time codes are not used, biometric authentication such as a retina or fingerprint scan can provide the second factor.
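The expiring codes used by most authenticator apps follow the time-based one-time password (TOTP) standard, RFC 6238. As a rough illustration of how those codes are generated and checked, the Python sketch below uses the pyotp library; the shared secret is generated randomly for the example and would normally be provisioned once, via a QR code, during 2FA enrollment.

# Sketch of time-based one-time passwords (TOTP, RFC 6238) using the
# pyotp library (pip install pyotp). The authenticator app and the login
# server share a secret and derive the same 6-digit code from the current
# time, so codes expire quickly (every 30 seconds by default).
import pyotp

secret = pyotp.random_base32()    # shared secret, provisioned during 2FA setup
totp = pyotp.TOTP(secret)

code = totp.now()                 # what the authenticator app displays
print("One-time code:", code)

# The server runs the same computation and checks the submitted code.
print("Valid now?      ", totp.verify(code))
print("Wrong code valid?", totp.verify("000000"))  # almost certainly False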

3. Invest in Intelligent IT Systems

When dealing with high volumes of highly confidential data, you can never be too careful about your online security. The odds are not in your favor: one in four organizations in the US will experience a breach. And recovering from a breach is pricey. Law firms lose, on average, $4.62 million per data breach. If you worry about the expense of cybersecurity solutions, keep that number in mind.

You can spend money on anti-phishing, anti-malware, and data loss prevention tools. Or you can skip that spending and risk paying a ransom, legal fees, reputational damage, and more. Although it’s a tough pill to swallow in the current economic landscape, preventative security is cheaper than dealing with a breach.

If you cannot afford a cybersecurity system at this time, at least update your software whenever you receive a notification. This is the easiest and quickest way to secure your systems. Software updates come with security fixes that patch vulnerabilities in your system, and hackers are known to exploit old, known vulnerabilities. Also take the time to vet your network or cloud service providers to see what precautions they have in place to protect your firm from cybercriminals.

You Must Anticipate Cyberattacks on Your Firm 

Law firms possess sensitive data that hackers would love to leverage. Using intelligent IT systems, updating software, encrypting data, and setting up two-factor authentication are the most effective ways that lawyers can protect their data while working remotely during the COVID-19 lockdown.


© Copyright 2020 PracticePanther
