Unlike a Fine Wine, Tax Issues Do Not Get Better with Age

In a recent decision, the New York State Tax Appeals Tribunal (“Tribunal”) upheld notices of deficiency issued by the New York State Department of Taxation and Finance (the “Department”) totaling approximately $15 million in additional tax, plus interest and penalties, for tax years dating as far back as 2002. In the Matter of the Petition of Cushlin Limited, DTA No. 829939 (TAT Oct. 10, 2024). The notices of deficiency came on the heels of an audit that lasted a decade, at the end of which the Department computed additional corporation franchise tax due “based on the information it had available” because the information the company provided over the 10-year audit was “incomplete and/or unsubstantiated.” This case is a cautionary tale for taxpayers and a reminder that tax issues do not get better with age: putting off known issues only makes the situation worse in the end.

The company was a corporation organized under the laws of the Isle of Man and was in the business of acquiring and refurbishing three- and four-star hotels. The company also owned equity interests in 13 limited liability companies (“LLCs”) that were doing business in New York. In 2008, the Department began an audit of one of the LLCs and discovered that it had sold real property in New York but did not file a New York State partnership tax return. As a result of its ownership interest in the LLCs, the Department determined that the company was required to file New York corporation franchise tax returns on which it was required to report the gains and losses of the LLCs. The audit of the company initially covered the tax years 2002 through 2006 and was later expanded to include 2007 through 2009.

From 2010 through 2013, the company and the Department communicated multiple times, and the company repeatedly stated that it was preparing tax returns for the audit years for the LLCs and the company and that it required more time to prepare those returns. In 2013, the company provided the Department with draft tax returns for the company and the LLCs. In May 2016, after another three years passed without final tax returns being filed, the Department informed the company that it was assessing additional corporation franchise tax computed based on the amounts in the draft returns plus interest and penalties. In June 2016, the company filed final tax returns for all of the audit years, and the final returns reflected income and deductions that were larger than the amounts previously included in the draft returns.

The Department then issued three information document requests over the next two years seeking substantiation for the deductions claimed on the filed returns. The Tribunal found that, in response to these requests, the company “provided only partial responses that lacked any externally verifiable substantiation” and that repeated Department requests were “met with partial, inconclusive responses.” Finally, in 2018, the Department issued notices of deficiency assessing the additional corporation franchise tax it had previously computed from the company’s draft returns, plus updated interest and penalties. The company appealed, and the Administrative Law Judge (“ALJ”) sustained the notices, finding: (1) that the company had failed to meet its burden of proving that the notices were incorrect; and (2) that penalties were properly imposed because the company also failed to demonstrate that its failure to file timely returns was a result of reasonable cause and not willful neglect.

The Tribunal agreed with the ALJ, concluding that there was a rational basis for the notices because “[w]orking without returns or supporting documentation more than six years after it began, the [Department] used the available information provided by petitioner, verified by other information contained in the [Department’s] own database, to arrive at a computation of tax due from petitioner.” Moreover, the Tribunal reasoned, the Department provided the company with numerous opportunities to substantiate the amounts that the company reported on its filed returns and the company’s failure to provide substantiating information left the Department with “little choice” but to “use another method to arrive at a determination of tax liability.” Finally, the Tribunal concluded that the ALJ correctly determined that penalties were properly imposed as the company did not meet its burden to demonstrate reasonable cause.

Shorter Path to Green Card: New USCIS Guidance for EB-1 Eligibility for Foreign Nationals With Extraordinary Ability

For foreign nationals with “extraordinary ability” in the sciences, arts, education, business or athletics, the path to a green card can be much shorter than usual. The EB-1 extraordinary ability category is an employment-based, first-preference visa with several advantages for the “small percentage of individuals” positioned to prove their expertise within a specific area. As befits this elite immigrant visa category, an extraordinary amount of documentation is required to meet the high threshold for EB-1 eligibility.

This category, reserved for individuals with extraordinary ability, requires applicants to demonstrate that ability through sustained national or international acclaim. To do so, applicants must meet at least three of 10 regulatory criteria or provide evidence of a major one-time achievement, such as a Pulitzer Prize, an Oscar, or an Olympic medal. In addition, applicants must provide evidence showing that they will continue to work in their area of expertise.

More specifically, the applicant must provide evidence of at least three of the following:

  • Receipt of lesser nationally or internationally recognized prizes or awards for excellence
  • Membership in associations in the field which demand outstanding achievement of their members
  • Published material about the candidate in professional or major trade publications or other major media
  • Judgment of the work of others, either individually or on a panel
  • Original scientific, scholarly, artistic, athletic, or business-related contributions of major significance to the field
  • Authorship of scholarly articles in professional or major trade publications or other major media
  • Display of work at artistic exhibitions or showcases
  • Performance of a leading or critical role in distinguished organizations
  • Command of a high salary or other significantly high remuneration in relation to others in the field
  • Commercial successes in the performing arts

While U.S. Citizenship and Immigration Services (USCIS) has been consistent in detailing the criteria to be demonstrated, the specific evidence deemed acceptable has evolved over the lifetime of this visa category. Last September, USCIS updated its policy manual on employment-based first-preference (EB-1) immigrant petitions in the Extraordinary Ability classification. Specifically, USCIS provided examples of comparable evidence and the way in which USCIS will “consider any potentially relevant evidence.”

To further clarify the acceptable types of evidence, USCIS issued another policy manual update on Oct. 2. The most recent update provided additional clarification, stating:

  • “Confirms that we consider a person’s receipt of team awards under the criterion for lesser nationally or internationally recognized prizes or awards for excellence in the field of endeavor;
  • Clarifies that we consider past memberships under the membership criterion;
  • Removes language suggesting published material must demonstrate the value of the person’s work and contributions to satisfy the published material criterion; and
  • Explains that while the dictionary defines an ‘exhibition’ as a public showing not limited to art, the relevant regulation expressly modifies that term with ‘artistic,’ such that we will only consider non-artistic exhibitions as part of a properly supported claim of comparable evidence.”

These clarifications likely will provide more consistency in the adjudication process.

This article was co-authored by Tieranny Cutler, independent contract attorney.

PRIVACY ON ICE: A Chilling Look at Third-Party Data Risks for Companies

An intelligent lawyer can tackle a problem and figure out a solution. But a brilliant lawyer figures out how to prevent the problem in the first place. That’s precisely what we do here at Troutman Amin. So here is the latest scoop to keep you cool. A recent case in the United States District Court for the Northern District of California, Smith v. Yeti Coolers, L.L.C., No. 24-cv-01703-RFL, 2024 U.S. Dist. LEXIS 194481 (N.D. Cal. Oct. 21, 2024), addresses complex issues surrounding online privacy and the liability of companies that enable third parties to collect and use consumer data without proper disclosures or consent.

Here, Plaintiff alleged that Yeti Coolers (“Yeti”) used a third-party payment processor, Adyen, that collected customers’ personal and financial information during transactions on Yeti’s website. Plaintiff claimed Adyen then stored this data and used it for its own commercial purposes, like marketing fraud prevention services to merchants, without customers’ knowledge or consent. Alarm bells should be going off in your head—this could signal a concerning trend in data practices.

Plaintiff sued Yeti under the California Invasion of Privacy Act (“CIPA”) for violating California Penal Code Sections 631(a) (wiretapping) and 632 (recording confidential communications). Plaintiff also brought a claim under the California Constitution for invasion of privacy. The key question here was whether Yeti could be held derivatively liable for Adyen’s alleged wrongful conduct.

So, let’s break this down step by step.

Under the alleged CIPA Section 631(a) violation, the court found that Plaintiff plausibly alleged Adyen violated this Section by collecting customer data as a third-party eavesdropper without proper consent. In analyzing whether Yeti’s Privacy Policy and Terms of Use constituted enforceable agreements, it applied the legal frameworks for “clickwrap” and “browsewrap” agreements.

Luckily, my Contracts professor in law school here in Florida was the remarkable Todd J. Clark, now dean of Widener University Delaware Law School. For those who snoozed through Contracts, here is a refresher:

Clickwrap agreements present the website’s terms to the user and require the user to affirmatively click an “I agree” button to proceed. Browsewrap agreements simply post the terms via a hyperlink, typically at the bottom of the webpage. For either type of agreement to be enforceable, the Court explained, a website must (1) provide reasonably conspicuous notice of the terms and (2) require some action unambiguously manifesting assent. See Oberstein v. Live Nation Ent., Inc., 60 F.4th 505, 515 (9th Cir. 2023).
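
The two Oberstein prongs boil down to a simple conjunctive checklist. As a rough illustration only (not legal advice; the type and function names are hypothetical):

```typescript
// Illustrative sketch of the Oberstein enforceability test.
// Names are hypothetical; this is a checklist, not a legal standard in code.
interface TermsPresentation {
  conspicuousNotice: boolean;  // terms (or a clear link to them) are prominently displayed
  unambiguousAssent: boolean;  // e.g., the user must click an "I agree" button to proceed
}

// Both prongs must be satisfied; a conspicuous banner with no required
// action fails the second prong.
function meetsObersteinTest(p: TermsPresentation): boolean {
  return p.conspicuousNotice && p.unambiguousAssent;
}

// A classic clickwrap flow satisfies both prongs:
console.log(meetsObersteinTest({ conspicuousNotice: true, unambiguousAssent: true })); // true
// A browsewrap link with no required action does not:
console.log(meetsObersteinTest({ conspicuousNotice: true, unambiguousAssent: false })); // false
```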

The Court held that while Yeti’s pop-up banner and policy links were conspicuous, they did not create an enforceable clickwrap agreement because “Defendant’s pop-up banner does not require individuals to click an ‘I agree’ button, nor does it include any language to imply that by proceeding to use the website, users reasonably consent to Defendant’s terms and conditions of use.” See Smith, 2024 U.S. Dist. LEXIS 194481, at *8. The Court also found no enforceable browsewrap agreement was formed because although the policies were conspicuously available, “Defendant’s website does not require additional action by users to demonstrate assent and does not conspicuously notify them that continuing to use the website constitutes assent to the Privacy Policy and Terms of Use.” Id. at *9.

What is more, the Court relied on Nguyen v. Barnes & Noble Inc., 763 F.3d 1171, 1179 (9th Cir. 2014), which held that “where a website makes its terms of use available via a conspicuous hyperlink on every page of the website but otherwise provides no notice to users nor prompts them to take any affirmative action to demonstrate assent, even close proximity of the hyperlink to relevant buttons users must click on—without more—is insufficient to give rise to constructive notice.” Here, the Court found the pop-up banner and link on Yeti’s homepage presented the same situation as in Nguyen and thus did not create an enforceable browsewrap agreement.

However, to establish Yeti’s derivative liability for “aiding” Adyen under Section 631(a), the Court held that Plaintiff had to allege facts showing Yeti acted with both knowledge of Adyen’s unlawful conduct and the intent or purpose to assist it. It found Plaintiff’s allegations that Yeti was “aware of the purposes for which Adyen collects consumers’ sensitive information because Defendant is knowledgeable of and benefitting from Adyen’s fraud prevention services” and “assists Adyen in intercepting and indefinitely storing this sensitive information” were too conclusory. Smith, 2024 U.S. Dist. LEXIS 194481, at *13. It reasoned: “Without further information, the Court cannot plausibly infer from Defendant’s use of Adyen’s fraud prevention services alone that Defendant knew that Adyen’s services were based on its allegedly illegal interception and storing of financial information, collected during Adyen’s online processing of customers’ purchases.” Id.

Thus, the Court dismissed the Section 631(a) claim due to insufficient allegations that Yeti was aware of Adyen’s alleged violations.

Next, the Court similarly found that Plaintiff plausibly alleged Adyen recorded a confidential communication without consent in violation of CIPA Section 632. A communication is confidential under this section if a party “has an objectively reasonable expectation that the conversation is not being overheard or recorded.” Flanagan v. Flanagan, 27 Cal. 4th 766, 776-77 (2002). It explained that “[w]hether a party has a reasonable expectation of privacy is a context-specific inquiry that should not be adjudicated as a matter of law unless the undisputed material facts show no reasonable expectation of privacy.” Smith, 2024 U.S. Dist. LEXIS 194481, at *18-19. At the pleading stage, the Court found Plaintiff’s allegation that she reasonably expected her sensitive financial information would remain private was sufficient.

However, as with the Section 631(a) claim, the Court held that Plaintiff did not plead facts establishing Yeti’s derivative liability under the standard for aiding and abetting liability. Under Saunders v. Superior Court, 27 Cal. App. 4th 832, 846 (1994), the Court explained a defendant is liable if they a) know the other’s conduct is wrongful and substantially assist them or b) substantially assist the other in accomplishing a tortious result and the defendant’s own conduct separately breached a duty to the plaintiff. The Court found that the Complaint lacked sufficient non-conclusory allegations that Yeti knew or intended to assist Adyen’s alleged violation. See Smith, 2024 U.S. Dist. LEXIS 194481, at *16.

Lastly, the Court analyzed Plaintiff’s invasion of privacy claim under the California Constitution using the framework from Hill v. Nat’l Coll. Athletic Ass’n, 7 Cal. 4th 1, 35-37 (1994). For a valid invasion of privacy claim, Plaintiff had to show 1) a legally protected privacy interest, 2) a reasonable expectation of privacy under the circumstances, and 3) a serious invasion of privacy constituting “an egregious breach of the social norms.” Id.

The Court found Plaintiff had a protected informational privacy interest in her personal and financial data, as “individual[s] ha[ve] a legally protected privacy interest in ‘precluding the dissemination or misuse of sensitive and confidential information.’” Smith, 2024 U.S. Dist. LEXIS 194481, at *17. It also found Plaintiff plausibly alleged a reasonable expectation of privacy at this stage given the sensitivity of financial data, even if “voluntarily disclosed during the course of ordinary online commercial activity,” as this presents “precisely the type of fact-specific inquiry that cannot be decided on the pleadings.” Id. at *19-20.

Conversely, the Court found Plaintiff did not allege facts showing Yeti’s conduct was “an egregious breach of the social norms” rising to the level of a serious invasion of privacy, which requires more than “routine commercial behavior.” Id. at *21. The Court explained that while Yeti’s simple use of Adyen for payment processing cannot amount to a serious invasion of privacy, “if Defendant was aware of Adyen’s usage of the personal information for additional purposes, this may present a plausible allegation that Defendant’s conduct was sufficiently egregious to survive a Motion to Dismiss.” Id. However, absent such allegations about Yeti’s knowledge, this claim failed.

In the end, the Court dismissed Plaintiff’s Complaint but granted leave to amend to correct the deficiencies, so this case may not be over. The Court’s grant of “leave to amend” signals that if Plaintiff can sufficiently allege Yeti’s knowledge of or intent to facilitate Adyen’s use of customer data, these claims could proceed. As companies increasingly rely on third parties to handle customer data, we will likely see more litigation in this area, testing the boundaries of corporate liability for data privacy violations.

So, what is the takeaway? As a brilliant lawyer, your company’s goal should be to prevent privacy pitfalls before they snowball into costly litigation. Key things to keep in mind are 1) ensure your privacy policies and terms of use are properly structured as enforceable clickwrap or browsewrap agreements, with conspicuous notice and clear assent mechanisms; 2) conduct thorough due diligence on third-party service providers’ data practices and contractual protections; 3) implement transparent data collection and sharing disclosures for informed customer consent; and 4) stay abreast of evolving privacy laws.

In essence, taking these proactive steps can help mitigate the risks of derivative liability for third-party misconduct and, most importantly, foster trust with your customers.

Colorado AG Proposes Draft Amendments to the Colorado Privacy Act Rules

On September 13, 2024, the Colorado Attorney General’s (AG) Office published proposed draft amendments to the Colorado Privacy Act (CPA) Rules. The proposals include new requirements related to biometric collection and use (applicable to all companies and employers that collect biometrics of Colorado residents) and children’s privacy. They also introduce methods by which businesses could seek regulatory guidance from the Colorado AG.

The draft amendments seek to align the CPA with Senate Bill 41, Privacy Protections for Children’s Online Data, and House Bill 1130, Privacy of Biometric Identifiers & Data, both of which were enacted earlier this year and will largely come into effect in 2025. Comments on the proposed regulations can be submitted beginning on September 25, 2024, in advance of a November 7, 2024, rulemaking hearing.

In Depth


PRIVACY OF BIOMETRIC IDENTIFIERS & DATA

In comparison to other state laws like the Illinois Biometric Information Privacy Act (BIPA), the CPA proposed draft amendments do not include a private right of action. That said, the proposed draft amendments include several significant revisions to the processing of biometric identifiers and data, including:

  • Create New Notice Obligations: The draft amendments require any business (including employers and those not otherwise subject to the CPA’s applicability thresholds) that collects biometrics from consumers or employees to provide a “Biometric Identifier Notice” before collecting or processing biometric information. The notice must include which biometric identifier is being collected, the reason for collecting it, the length of time the controller will retain it, and whether it will be disclosed, redisclosed, or otherwise disseminated to a processor, along with the purpose of such disclosure. The notice must be reasonably accessible, either as a standalone disclosure or, if embedded within the controller’s privacy notice, via a clear link to the specific section of the privacy notice that contains the Biometric Identifier Notice.
  • Revisit When Consent Is Required: The draft amendments require controllers to obtain explicit consent from the data subject before selling, leasing, trading, disclosing, redisclosing, or otherwise disseminating biometric information. The amendments also allow employers to collect and process biometric identifiers as a condition for employment in limited circumstances (much more limited than Illinois’s BIPA, for example).
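
As a rough illustration of the notice contents described above, the required elements could be modeled as a simple data structure (the field names here are my own shorthand, not taken from the rule text):

```typescript
// Hypothetical data model for a Colorado "Biometric Identifier Notice"
// under the draft amendments. Field names are illustrative, not official.
interface BiometricIdentifierNotice {
  identifierCollected: string;    // which biometric identifier is collected (e.g., fingerprint)
  purposeOfCollection: string;    // the reason for collecting the identifier
  retentionPeriod: string;        // how long the controller will retain it
  disclosedToProcessors: boolean; // whether it will be disclosed, redisclosed, or disseminated
  purposeOfDisclosure?: string;   // the purpose of any such disclosure, when applicable
}

// Example notice for a hypothetical employer using fingerprint timekeeping:
const notice: BiometricIdentifierNotice = {
  identifierCollected: "fingerprint",
  purposeOfCollection: "employee timekeeping",
  retentionPeriod: "duration of employment plus 30 days",
  disclosedToProcessors: true,
  purposeOfDisclosure: "hosted timekeeping service",
};
console.log(notice.identifierCollected);
```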

PRIVACY PROTECTIONS FOR CHILDREN’S ONLINE DATA

The draft amendments also include several updates to existing CPA requirements related to minors:

  • Delineate Between Consumers Based on Age: The draft amendments define a “child” as an individual under 13 years of age and a “minor” as an individual under 18 years of age, creating additional protections for teenagers.
  • Update Data Protection Assessment Requirements: The draft amendments expand the scope of data protection assessments to include processing activities that pose a heightened risk of harm to minors. Under the draft amendments, entities performing assessments must disclose whether personal data from minors is processed as well as identify any potential sources and types of heightened risk to minors that would be a reasonably foreseeable result of offering online services, products, or features to minors.
  • Revisit When Consent Is Required: The draft amendments require controllers to obtain explicit consent before processing the personal data of a minor and before using any system design feature to significantly increase, sustain, or extend a minor’s use of an online service, product, or feature.

OPINION LETTERS AND INTERPRETIVE GUIDANCE

In a welcome effort to create a process by which businesses and the public can understand more about the scope and applicability of the CPA, the draft amendments:

  • Create a Formal Feedback Process: The draft amendments would permit individuals or entities to request an opinion letter from the Colorado AG regarding aspects of the CPA and its application. Entities that have received and relied on applicable guidance offered via an opinion letter may use that guidance as a good faith defense against later claims of having violated the CPA.
  • Clarify the Role of Non-Binding Advice: Separate and in addition to the formal opinion letter process, the draft amendments provide a process by which any person affected directly or indirectly by the CPA may request interpretive guidance from the AG. Unlike the guidance in an opinion letter, interpretive guidance would not be binding on the Colorado AG and would not serve as a basis for a good faith defense. Nonetheless, a process for obtaining interpretive guidance is a novel, and welcome, addition to the state law fabric.

WHAT’S NEXT?

While subject to change pursuant to public consultation, assuming the proposed CPA amendments are finalized, they would become effective on July 1, 2025. Businesses interested in shaping and commenting on the draft amendments should consider promptly submitting comments to the Colorado AG.

Legal and Privacy Considerations When Using Internet Tools for Targeted Marketing

Businesses often rely on targeted marketing methods to reach their relevant audiences. Instead of paying for, say, a television commercial to be viewed by people across all segments of society with varied purchasing interests and budgets, a business can use tools provided by social media platforms and other internet services to target those people most likely to be interested in its ads. These tools may make targeted advertising easy, but businesses must be careful when using them – along with their ease of use comes a risk of running afoul of legal rules and regulations.

Two ways that businesses target audiences are working with influencers who have large followings in relevant segments of the public (which may implicate false or misleading advertising issues) and using third-party “cookies” to track users’ browsing history (which may implicate privacy and data protection issues). Most popular social media platforms offer tools to facilitate the use of these targeting methods. These tools are likely indispensable for some businesses, and despite their risks, they can be deployed safely once the risks are understood.

Some Platform-Provided Targeted Marketing Tools May Implicate Privacy Issues
Google recently announced1 that it will not be deprecating third-party cookies, a reversal from its previous plan to phase out these cookies. “Cookies” are small pieces of data that websites store in a user’s browser to track activity online. “First-party” cookies often are necessary for the website to function properly. “Third-party” cookies are set by domains other than the site the user is visiting and are shared across websites and companies, essentially tracking users’ browsing behaviors to help advertisers target their relevant audiences.
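
The first-party/third-party distinction comes down to comparing the domain that set the cookie with the site the user is visiting. A rough sketch (my own simplification; real browsers use the Public Suffix List rather than plain suffix matching):

```typescript
// Simplified illustration: a cookie is "third-party" when the domain that
// set it is not the site the user is visiting. Not a browser's actual
// algorithm, which relies on the Public Suffix List.
function isThirdPartyCookie(settingDomain: string, pageHost: string): boolean {
  // Cookie Domain attributes are often written with a leading dot (".example.com").
  const domain = settingDomain.replace(/^\./, "");
  const firstParty = pageHost === domain || pageHost.endsWith("." + domain);
  return !firstParty;
}

// The shop's own session cookie is first-party:
console.log(isThirdPartyCookie(".example.com", "shop.example.com")); // false
// A cookie set by an ad network while browsing the shop is third-party:
console.log(isThirdPartyCookie("ads.tracker.net", "shop.example.com")); // true
```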

In early 2020, Google announced2 that it would phase out third-party cookies, which raise privacy concerns because they track individual web-browsing activity and then share that data with other parties.

Fast forward about four and a half years, and Google reversed course. During that time, Google had introduced alternatives to third-party cookies, and companies had developed their own, often extensive, proprietary databases3 of information about their customers. However, none of these methods satisfied the advertising industry. Google then made the decision to keep third-party cookies. To address privacy concerns, Google said it would “introduce a new experience in Chrome that lets people make an informed choice that applies across their web browsing, and they’d be able to adjust that choice at any time.”4

Many large platforms in addition to Google offer targeted advertising services via the use of third-party cookies. Can businesses use these services without any legal ramifications? Does the ability of consumers to opt out mean that a business cannot be held liable on privacy grounds if it relies on third-party cookies? The relevant cases have held that individual businesses still must be careful despite any opt-out and other built-in tools offered by these platforms.

Two recent cases from the Southern District of New York5 held that individual businesses that used “Meta Pixels” to track consumers may be liable for violations of the Video Privacy Protection Act (VPPA), 18 U.S.C. § 2710. Facebook defines a Meta Pixel6 as a “piece of code … that allows you to … make sure your ads are shown to the right people … drive more sales, [and] measure the results of your ads.” In other words, a Meta Pixel is essentially a cookie provided by Meta/Facebook that helps businesses target ads to relevant audiences.

As demonstrated by those two recent cases, businesses cannot rely on a platform’s program to ensure their ad targeting efforts do not violate the law. These violations may expose companies to enormous damages – VPPA cases often are brought as class actions, and even a single violation carries liquidated damages of at least $2,500.
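
To see why class exposure mounts so quickly, here is a back-of-the-envelope sketch. The $2,500 per-violation floor comes from the statute; the class size is a hypothetical number of my own choosing:

```typescript
// Hypothetical VPPA exposure arithmetic. The statutory floor is real
// (18 U.S.C. § 2710(c)); the class size is an assumed figure for illustration.
const perViolationFloor = 2500;      // liquidated damages of not less than $2,500
const hypotheticalClassSize = 10000; // assumed class size, not from any actual case
const minimumExposure = perViolationFloor * hypotheticalClassSize;
console.log(minimumExposure); // 25000000, i.e., $25 million before fees and costs
```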

In those New York cases, the consumers had not consented to sharing information, but, even if they had, that consent may not have sufficed. Internet contracts, often included in a website’s Terms of Service, are notoriously difficult to enforce. For example, in one of those S.D.N.Y. cases, the court found that the arbitration clause to which subscribers had agreed was not effective to force arbitration in lieu of litigation for this matter. In addition, the type of consent and the information that websites need to provide before sharing information can be extensive and complicated, as recently reported7 by my colleagues.

Another issue that companies may encounter when relying on widespread cookie offerings is whether the mode (as opposed to the content) of data transfer complies with all relevant privacy laws. For example, the Swedish Data Protection Agency recently found8 that a company had violated the European Union’s General Data Protection Regulation (GDPR) because the method of transfer of data was not compliant. In that case, some of the consumers had consented, but some were never asked for consent.

Some Platform-Provided Targeted Marketing Tools May Implicate False or Misleading Advertising Issues
Another method that businesses use to target their advertising to relevant consumers is to hire social media influencers to endorse their products. These partnerships between brands and influencers can be beneficial to both parties and to the audiences who are guided toward the products they want. These partnerships are also subject to pitfalls, including reputational pitfalls (a controversial statement by the influencer may negatively impact the reputation of the brand) and legal pitfalls.

The Federal Trade Commission (FTC) has issued guidelines, “Concerning Use of Endorsements and Testimonials” in advertising,9 and has published a brochure for influencers, “Disclosures 101 for Social Media Influencers,”10 that tells influencers how they must apply the guidelines to avoid liability for false or misleading advertising when they endorse products. A key requirement is that influencers must “make it obvious” when they have a “material connection” with the brand. In other words, influencers must disclose that they are being paid (or receive other, non-monetary benefits) to make the endorsement.

Many social media platforms make it easy to disclose a material connection between a brand and an influencer – a built-in function allows influencers to simply click a check mark to disclose the existence of a material connection with respect to a particular video endorsement. The platform then displays a hashtag or other notification along with the video that says “#sponsored” or something similar. However, influencers cannot rely on these built-in notifications. The FTC brochure clearly states: “Don’t assume that a platform’s disclosure tool is good enough, but consider using it in addition to your own, good disclosure.”

Brands that sponsor influencer endorsements may easily find themselves on the hook if the influencer does not properly disclose that the influencer and the brand are materially connected. In some cases, the contract between the brand and the influencer may pass any risk to the brand. In others, the influencer may be judgment proof, or the brand may simply be an easier target for enforcement. And, unsurprisingly, the FTC has sent warning letters11 threatening high penalties to brands for influencer violations.

The Platform-Provided Tools May Be Deployed Safely
Despite the risks involved in some platform-provided tools for targeted marketing, these tools are very useful, and businesses should continue to take advantage of them. However, businesses cannot rely on these widely available and easy-to-use tools alone; they must ensure that their own policies and compliance programs protect them from liability.

The same warning about widely available social media tools and lessons for a business to protect itself are also true about other activities online, such as using platforms’ built-in “reposting” function (which may implicate intellectual property infringement issues) and using out-of-the-box website builders (which may implicate issues under the Americans with Disabilities Act). A good first step for a business to ensure legal compliance online is to understand the risks. An attorney experienced in internet law, privacy law and social media law can help.

_________________________________________________________________________________________________________________

1 https://privacysandbox.com/news/privacy-sandbox-update/

2 https://blog.chromium.org/2020/01/building-more-private-web-path-towards.html

3 Businesses should ensure that they protect these databases as trade secrets. See my recent Insights at https://www.wilsonelser.com/sarah-fink/publications/relying-on-noncompete-clauses-may-not-be-the-best-defense-of-proprietary-data-when-employees-depart and https://www.wilsonelser.com/sarah-fink/publications/a-practical-approach-to-preserving-proprietary-competitive-data-before-and-after-a-hack

4 https://privacysandbox.com/news/privacy-sandbox-update/

5 Aldana v. GameStop, Inc., 2024 U.S. Dist. LEXIS 29496 (S.D.N.Y. Feb. 21, 2024); Collins v. Pearson Educ., Inc., 2024 U.S. Dist. LEXIS 36214 (S.D.N.Y. Mar. 1, 2024)

6 https://www.facebook.com/business/help/742478679120153?id=1205376682832142

7 https://www.wilsonelser.com/jana-s-farmer/publications/new-york-state-attorney-general-issues-guidance-on-privacy-controls-and-web-tracking-technologies

8 See, e.g., https://www.dataguidance.com/news/sweden-imy-fines-avanza-bank-sek-15m-unlawful-transfer

9 https://www.ecfr.gov/current/title-16/chapter-I/subchapter-B/part-255

10 https://www.ftc.gov/system/files/documents/plain-language/1001a-influencer-guide-508_1.pd

11 https://www.ftc.gov/system/files/ftc_gov/pdf/warning-letter-american-bev.pdf
https://www.ftc.gov/system/files/ftc_gov/pdf/warning-letter-canadian-sugar.pdf

Cybersecurity Crunch: Building Strong Data Security Programs with Limited Resources – Insights from Tech and Financial Services Sectors

In today’s digital age, cybersecurity has become a paramount concern for executives navigating the complexities of their corporate ecosystems. With resources often limited and the ever-present threat of cyberattacks, establishing clear priorities is essential to safeguarding company assets.

Building the right team of security experts is a critical step in this process, ensuring that the organization is well-equipped to fend off potential threats. Equally important is securing buy-in from all stakeholders, as a unified approach to cybersecurity fosters a robust defense mechanism across all levels of the company.

This insider’s look at cybersecurity will delve into the strategic imperatives for companies aiming to protect their digital frontiers effectively.

Where Do You Start on Cybersecurity?
Pressures on corporate security teams are growing, both from internal stakeholders and outside threats. But the resources to do the job aren’t growing with them. So how can companies protect themselves in a real-world environment, where finances, employee time, and other resources are finite?

“You really have to understand what your company is in the business of doing,” Wilson said. “Every business will have different needs. Their risk tolerances will be different.”

“You really have to understand what your company is in the business of doing. Every business will have different needs. Their risk tolerances will be different.”

BRIAN WILSON, CHIEF INFORMATION SECURITY OFFICER, SAS
For example, Tuttle said that in the manufacturing sector, digital assets and data have become increasingly important in recent years. The physical product is no longer the be-all and end-all of the company’s success.

For cybersecurity professionals, this new reality leads to challenges and tough choices. Having a perfect cybersecurity system isn’t possible—not for a company doing business in a modern, digital world. Tuttle said, “If we’re going to enable this business to grow, we’re going to have to be forward-thinking.”

That means setting priorities for cybersecurity. Inskeep, who previously worked in cybersecurity for one of the world’s largest financial services institutions, said multi-factor authentication and controlling access is a good starting point, particularly against phishing and ransomware attacks. Also, he said companies need good back-up systems that enable them to recover lost data as well as robust incident response plans.
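To make the multi-factor authentication point concrete: the rotating codes generated by common authenticator apps follow the TOTP standard (RFC 6238). The sketch below, using only the Python standard library, shows the core of how a server can generate and verify such codes. The function names and the clock-drift window are illustrative choices, not any particular vendor's implementation.

```python
import hmac
import struct

def totp(secret: bytes, timestamp: int, digits: int = 6, step: int = 30) -> str:
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = timestamp // step                        # 30-second time window
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, timestamp: int, window: int = 1) -> bool:
    """Accept codes from adjacent time steps to tolerate clock drift."""
    return any(
        hmac.compare_digest(totp(secret, timestamp + drift * 30), submitted)
        for drift in range(-window, window + 1)
    )
```

Even a control this simple blunts the credential-phishing and ransomware entry points the panel describes, which is why it tops most prioritization lists.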

“Bad things are going to happen,” Wilson said. “You need to have logs and SIEMs to tell a story.”

Tuttle said one challenge in implementing an incident response plan is engaging team members who aren’t on the front lines of cybersecurity. “They need to know how to escalate quickly, because they are likely to be the first ones to see something that isn’t right,” she said. “They need to be thinking, ‘What should I be looking for and what’s my response?’”

“They need to know how to escalate quickly, because they are likely to be the first ones to see something that isn’t right. They need to be thinking, ‘What should I be looking for and what’s my response?’”

LISA TUTTLE, CHIEF INFORMATION SECURITY OFFICER, SPX TECHNOLOGIES
Wilson said tabletop exercises and security awareness training “are a good feedback loop to have to make sure you’re including the right people. They have to know what to do when something bad happens.”

Building a Security Team
Hiring and retaining good people in a harrowing field can be a challenge. Companies should leverage their external and internal networks to find data privacy and cybersecurity team members.

Wilson said SAS uses an intern program to help ensure they have trained professionals already in-house. He also said a company’s Help Desk can be a good source of talent.

Remote work also allows companies to cast a wider net for hiring employees. The challenge becomes keeping remote workers engaged, and companies should consider how they can make these far-flung team members feel part of the team.

Inskeep said burnout is a problem in the cybersecurity field. “It’s a job that can feel overwhelming sometimes,” he said. “Interacting with people and protecting them from that burnout has become more critical than ever.”

“It’s a job that can feel overwhelming sometimes. Interacting with people and protecting them from that burnout has become more critical than ever.”

TODD INSKEEP, FOUNDER AND CYBERSECURITY ADVISOR, INCOVATE SOLUTIONS
Weighing Levels of Compliance
The first step, Claypoole said, is understanding the compliance obligations the company faces. These obligations include both regulatory requirements (which are tightening) as well as contract terms from customers.

“For a business, that can be scary, because your business may be agreeing to contract terms with customers and they aren’t asking you about the security requirements in those contracts,” Wilson said.

The panel also noted that “compliance” and “security” aren’t the same thing. Compliance is a minimum set of standards that must be met, while security is a more wide-reaching goal.

But company leaders must realize they can’t have a perfect cybersecurity system, even if they could afford it. It’s important to identify priorities—including which operations are the most important to the company and which would be most disruptive if they went offline.

Wilson noted that global privacy regulations are increasing and becoming stricter every year. In addition, federal officials have taken criminal action against CSOs in recent years.

“Everybody’s radar is kind of up,” Tuttle said. The increasing compliance pressure also means it’s important for cybersecurity teams to work collaboratively with other departments, rather than making key decisions in a vacuum. Inskeep said such decisions need to be carefully documented as well.

“If you get to a place where you are being investigated, you need your own lawyer,” Claypoole said.

“If you get to a place where you are being investigated, you need your own lawyer.”

TED CLAYPOOLE, PARTNER, WOMBLE BOND DICKINSON
Cyberinsurance is another consideration for data privacy teams, and it can help Chief Security Officers make the case for more resources (both financial and work hours). Inskeep said cyberinsurance questions also can help companies identify areas of risk and where they need to prioritize their efforts. Such priorities can change, and he said companies need a committee or some other mechanism to regularly review and update cybersecurity priorities.

Wilson said one positive change he’s seen is that top executives now understand the importance of cybersecurity and are more willing to include cybersecurity team members in the up-front decision-making process.

Bringing in Outside Expertise
Consultants and vendors can be helpful to a cybersecurity team, particularly for smaller teams. Companies can move certain functions to third-party consultants, allowing their own teams to focus on core priorities.

“If we don’t have that internal expertise, that’s a situation where we’d call in third-party resources,” Wilson said.

Bringing in outside professionals also can help a company keep up with new trends and new technologies.

Ultimately, a proactive and well-coordinated cybersecurity strategy is indispensable for safeguarding the digital landscape of modern enterprises. With an ever-evolving threat landscape, companies must be agile in their approach and continuously review and update their security measures. At the core of any effective cybersecurity plan is a comprehensive risk management framework that identifies potential vulnerabilities and outlines steps to mitigate their impact. This framework should also include incident response protocols to minimize the damage in case of a cyberattack.

In addition to technology and processes, the human element is crucial in cybersecurity. Employees must be educated on how to spot potential threats, such as phishing emails or suspicious links, and know what steps to take if they encounter them.

Key Takeaways:
What are the biggest risk areas and how do you minimize those risks?
Know your external cyber footprint. This is what attackers see and will target.
Align with your team, your peers, and your executive staff.
Prioritize implementing multi-factor authentication and controlling access to protect against common threats like phishing and ransomware.
Develop reliable backup systems and robust incident response plans to recover lost data and respond quickly to cyber incidents.
Engage team members who are not on the front lines of cybersecurity to ensure quick identification and escalation of potential threats.
Conduct tabletop exercises and security awareness training regularly.
Leverage intern programs and help desk personnel to build a strong cybersecurity team internally.
Explore remote work options to widen the talent pool for hiring cybersecurity professionals, while keeping remote workers engaged and integrated.
Balance regulatory compliance with overall security goals, understanding that compliance is just a minimum standard.

Copyright © 2024 Womble Bond Dickinson (US) LLP All Rights Reserved.

by: Theodore F. Claypoole of Womble Bond Dickinson (US) LLP

For more on Cybersecurity, visit the Communications Media Internet section.

Mandatory Cybersecurity Incident Reporting: The Dawn of a New Era for Businesses

A significant shift in cybersecurity compliance is on the horizon, and businesses need to prepare. Organizations will soon face new requirements to report cybersecurity incidents and ransomware payments to the federal government. This change stems from the U.S. Department of Homeland Security’s (DHS) Cybersecurity and Infrastructure Security Agency (CISA) issuing a Notice of Proposed Rulemaking (NPRM) on April 4, 2024, to implement the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (CIRCIA). Essentially, this means that “covered entities” must report specific cyber incidents and ransom payments to CISA within defined timeframes.

Background

Back in March 2022, President Joe Biden signed CIRCIA into law. This was a big step towards improving America’s cybersecurity. The law requires CISA to create and enforce regulations mandating that covered entities report cyber incidents and ransom payments. The goal is to help CISA quickly assist victims, analyze trends across different sectors, and share crucial information with network defenders to prevent other potential attacks.

The proposed rule is open for public comments until July 3, 2024. After this period, CISA has 18 months to finalize the rule, with publication expected around October 4, 2025, and the rule taking effect in early 2026. This document provides an overview of the NPRM, highlighting its key points from the detailed Federal Register notice.

Cyber Incident Reporting Initiatives

CIRCIA includes several key requirements for mandatory cyber incident reporting:

  • Cyber Incident Reporting Requirements – CIRCIA mandates that CISA develop regulations requiring covered entities to report any covered cyber incidents within 72 hours from the time the entity reasonably believes the incident occurred.
  • Federal Incident Report Sharing – Any federal entity receiving a report on a cyber incident after the final rule’s effective date must share that report with CISA within 24 hours. CISA will also need to make information received under CIRCIA available to certain federal agencies within the same timeframe.
  • Cyber Incident Reporting Council – The Department of Homeland Security (DHS) must establish and chair an intergovernmental Cyber Incident Reporting Council to coordinate, deconflict, and harmonize federal incident reporting requirements.

Ransomware Initiatives

CIRCIA also authorizes or mandates several initiatives to combat ransomware:

  • Ransom Payment Reporting Requirements – CISA must develop regulations requiring covered entities to report to CISA within 24 hours of making any ransom payments due to a ransomware attack. These reports must be shared with federal agencies similarly to cyber incident reports.
  • Ransomware Vulnerability Warning Pilot Program – CISA must establish a pilot program to identify systems vulnerable to ransomware attacks and may notify the owners of these systems.
  • Joint Ransomware Task Force – CISA has announced the launch of the Joint Ransomware Task Force to build on existing efforts to coordinate a nationwide campaign against ransomware attacks. This task force will work closely with the Federal Bureau of Investigation and the Office of the National Cyber Director.

Scope of Applicability

The regulation targets many “covered entities” within critical infrastructure sectors. CISA clarifies that “covered entities” encompass more than just owners and operators of critical infrastructure systems and assets. Entities actively participating in these sectors might be considered “in the sector,” even if they are not critical infrastructure themselves. Entities uncertain about their status are encouraged to contact CISA.

Critical Infrastructure Sectors

CISA’s interpretation includes entities within one of the 16 sectors defined by Presidential Policy Directive 21 (PPD-21): Chemical; Commercial Facilities; Communications; Critical Manufacturing; Dams; Defense Industrial Base; Emergency Services; Energy; Financial Services; Food and Agriculture; Government Facilities; Healthcare and Public Health; Information Technology; Nuclear Reactors, Materials, and Waste; Transportation Systems; and Water and Wastewater Systems.

Covered Entities

CISA aims to include small businesses that own and operate critical infrastructure by setting additional sector-based criteria. The proposed rule applies to organizations falling into one of two categories:

  1. Entities operating within critical infrastructure sectors, except small businesses
  2. Entities in critical infrastructure sectors that meet sector-based criteria, even if they are small businesses

Size-Based Criteria

The size-based criteria use Small Business Administration (SBA) standards, which vary by industry and are based on annual revenue and number of employees. Entities in critical infrastructure sectors exceeding these thresholds are “covered entities.” The SBA standards are updated periodically, so organizations must stay informed about the current thresholds applicable to their industry.

Sector-Based Criteria

The sector-based criteria target essential entities within a sector, regardless of size, based on the potential consequences of disruption. The proposed rule outlines specific criteria for nearly all 16 critical infrastructure sectors. For instance, in the information technology sector, the criteria include:

  • Entities providing IT services for the federal government
  • Entities developing, licensing, or maintaining critical software
  • Manufacturers, vendors, or integrators of operational technology hardware or software
  • Entities involved in election-related information and communications technology

In the healthcare and public health sector, the criteria include:

  • Hospitals with 100 or more beds
  • Critical access hospitals
  • Manufacturers of certain drugs or medical devices

Covered Cyber Incidents

Covered entities must report “covered cyber incidents,” which include significant loss of confidentiality, integrity, or availability of an information system, serious impacts on operational system safety and resiliency, disruption of business or industrial operations, and unauthorized access due to third-party service provider compromises or supply chain breaches.

Significant Incidents

This definition covers substantial cyber incidents regardless of their cause, such as third-party compromises, denial-of-service attacks, and vulnerabilities in open-source code. However, threats or activities responding to owner/operator requests are not included. Substantial incidents include encryption of core systems, exploitation causing extended downtime, and ransomware attacks on industrial control systems.

Reporting Requirements

Covered entities must report cyber incidents to CISA within 72 hours of reasonably believing an incident has occurred. Reports must be submitted via a web-based “CIRCIA Incident Reporting Form” on CISA’s website and include extensive details about the incident and ransom payments.

Report Types and Timelines

  • Covered Cyber Incident Reports – within 72 hours of reasonably believing a covered incident occurred
  • Ransom Payment Reports – within 24 hours of making a ransom payment following a ransomware attack
  • Joint Covered Cyber Incident and Ransom Payment Reports – within 72 hours when a covered incident involves a ransom payment
  • Supplemental Reports – within 24 hours if substantial new information or additional ransom payments arise
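The proposed timelines are simple enough to encode directly; a compliance team tracking deadlines might sketch them as follows. The dictionary keys and function name below are illustrative, not CISA terminology:

```python
from datetime import datetime, timedelta

# Proposed CIRCIA reporting windows, in hours. These mirror the timelines
# described in the NPRM; the structure and names here are illustrative.
REPORT_DEADLINES_HOURS = {
    "covered_cyber_incident": 72,    # from reasonable belief an incident occurred
    "ransom_payment": 24,            # from making a ransom payment
    "joint_incident_and_payment": 72,
    "supplemental": 24,              # from discovering substantial new information
}

def report_deadline(report_type: str, trigger_time: datetime) -> datetime:
    """Return the latest permissible submission time for a given report type."""
    return trigger_time + timedelta(hours=REPORT_DEADLINES_HOURS[report_type])
```

For example, an incident reasonably believed to have occurred on a Monday at 9:00 a.m. would require a covered cyber incident report by Thursday at 9:00 a.m.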

Entities must retain data used for reports for at least two years. They can authorize a third party to submit reports on their behalf but remain responsible for compliance.

Exemptions for Similar Reporting

Covered entities may be exempt from CIRCIA reporting if they have already reported to another federal agency, provided an agreement exists between CISA and that agency. This agreement must ensure the reporting requirements are substantially similar, and the agency must share information with CISA. Federal agencies that report to CISA under the Federal Information Security Modernization Act (FISMA) are exempt from CIRCIA reporting.

These agreements are still being developed. Entities reporting to other federal agencies should stay informed about their progress to understand how they will impact their reporting obligations under CIRCIA.

Enforcement and Penalties

The CISA director can issue a request for information (RFI) if an entity fails to submit a required report. Non-compliance can lead to civil action or court orders, including penalties such as debarment and restrictions on future government contracts. False statements in reports may result in criminal penalties.

Information Protection

CIRCIA protects reports and RFI responses, including immunity from enforcement actions based solely on report submissions and protections against legal discovery and use in proceedings. Reports are exempt from Freedom of Information Act (FOIA) disclosures, and entities can designate reports as “commercial, financial, and proprietary information.” Information can be shared with federal agencies for cybersecurity purposes or specific threats.

Business Takeaways

Although the rule will not take effect until late 2025 at the earliest, companies should begin preparing now. Entities should review the proposed rule to determine if they qualify as covered entities and understand the reporting requirements, then adjust their security programs and incident response plans accordingly. Creating a regulatory notification chart can help track various incident reporting obligations. Proactive measures, and potentially formal comments on the proposed rule, can aid in compliance once the rules are finalized.

These steps are designed to guide companies in preparing for CIRCIA, though each company must assess its own needs and procedures within its specific operational, business, and regulatory context.


Bidding Farewell, For Now: Google’s Ad Auction Class Certification Victory

A federal judge in the Northern District of California delivered a blow to a potential class action lawsuit against Google over its ad auction practices. The lawsuit, which allegedly involved tens of millions of Google account holders, claimed Google’s practices in its real-time bidding (RTB) auctions violated users’ privacy rights. But U.S. District Judge Yvonne Gonzalez Rogers declined to certify the class of consumers, pointing to deficiencies in the plaintiffs’ proposed class definition.

According to plaintiffs, Google’s RTB auctions share highly specific personal information about individuals with auction participants, including device identifiers, location data, IP addresses, and unique demographic and biometric data, including age and gender. This, the plaintiffs argued, directly contradicted Google’s promises to protect users’ data. The plaintiffs therefore proposed a class definition that included all Google account holders subject to the company’s U.S. terms of service whose personal information was allegedly sold or shared by Google in its ad auctions after June 28, 2016.

But Google challenged this definition on the basis that it “embed[ded] the concept of personal information” and therefore subsumed a dispositive issue on the merits, i.e., whether Google actually shared account holders’ personal information. Google argued that the definition amounted to a fail-safe class since it would include even uninjured members. The Court agreed. As Judge Gonzalez Rogers noted, plaintiffs’ broad class definition included a significant number of potentially uninjured class members, warranting denial of their certification motion.

Google further argued that merely striking the reference to “personal information,” as proposed by plaintiffs, would not fix this problem. While the Court acknowledged this point, it concluded that it did not yet have enough information to make that determination. Because the Court denied plaintiffs’ certification motion with leave to amend, it encouraged the parties to address these concerns in any subsequent rounds of briefing.

In addition, Judge Gonzalez Rogers noted that plaintiffs would need to demonstrate that the RTB data produced in the matter thus far was representative of the class as a whole. While the Court agreed with plaintiffs’ argument and supporting evidence that Google “share[d] so much information about named plaintiffs that its RTB data constitute[d] ‘personal information,’” she was not persuaded by their assertion that the collected RTB data would necessarily also provide common evidence for the rest of the class. The Court thus determined that plaintiffs needed to affirmatively demonstrate through additional evidence that the RTB data was representative of all putative class members, and noted that Google could not refuse to provide such data and then assert that plaintiffs had failed to meet their burden.

This decision underscores the growing complexity of litigating privacy issues in the digital age, and previews new challenges plaintiffs may face in demonstrating commonality and typicality among a proposed class in privacy litigation. The decision is also instructive for modern companies that amass various kinds of data insofar as it demonstrates that seemingly harmless pieces of that data may, in the aggregate, still be traceable to specific persons and thus qualify as personally identifying information mandating compliance with the patchwork of privacy laws throughout the U.S.

Incorporating AI to Address Mental Health Challenges in K-12 Students

The National Institute of Mental Health reported that 16.32% of youth (aged 12-17) in the District of Columbia (DC) experience at least one major depressive episode (MDE).
Although the prevalence of youth with MDE in DC is lower than in some states, such as Oregon (where it reached 21.13%), it is important to address mental health challenges in youth early, as untreated mental health challenges can persist into adulthood. Further, the number of youth with MDE climbs nationally each year; last year it rose by almost 2%, to approximately 300,000 youth.

It is important to note that there are programs specifically designed to help and treat youth who have experienced trauma and are living with mental health challenges. In DC, several mental health services and professional counseling services are available to residents. Most importantly, there is a broad-reaching school-based mental health program that aims to place a behavioral health expert in every school building. Additionally, the DC government’s website provides a list of available mental health services programs.

In conjunction with these mental health programs, early identification of students at risk for suicide, self-harm, and behavioral issues can help states, including DC, ensure access to mental health care and support for these young individuals. In response to the widespread youth mental health crisis, K-12 schools are employing artificial intelligence (AI)-based tools to identify students at risk for suicide and self-harm. Through AI-based suicide risk monitoring, natural language processing, sentiment analysis, predictive models, early intervention, and surveillance and evaluation, AI is playing a crucial role in addressing the mental health challenges faced by youth.

AI systems, developed by companies like Bark, Gaggle, and GoGuardian, aim to monitor students’ digital footprint through various data inputs, such as online interactions and behavioral patterns, for signs of distress or risk. These programs identify students who may be at risk for self-harm or suicide and alert the school and parents accordingly.

Proposals are being introduced for using AI models to enhance mental health surveillance in school settings by implementing chatbots that interact with students. The chatbot conversation logs serve as the source of raw data for machine learning. According to Using AI for Mental Health Analysis and Prediction in School Surveys, existing survey results evaluated by health experts can be used to create a test dataset to validate the machine learning models. Supervised learning can then be deployed to classify specific behaviors and mental health patterns. However, there are concerns about how these programs work and what safeguards the companies have in place to prevent youths’ data from being sold to other platforms. Additionally, there are concerns about whether these companies are complying with relevant laws (e.g., the Family Educational Rights and Privacy Act [FERPA]).
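As an illustration of the supervised-learning step described above, the toy classifier below learns word frequencies from labeled example messages and assigns a label to new ones. It is a deliberately minimal naive Bayes sketch in plain Python, nothing like a production or clinically validated system; the class name, labels, and training texts are invented for the example.

```python
import math
from collections import Counter, defaultdict

class TinyNaiveBayes:
    """Toy supervised text classifier; illustrative only, not a clinical tool."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)   # per-label word frequencies
        self.label_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        scores = {}
        for label, n_docs in self.label_counts.items():
            score = math.log(n_docs / sum(self.label_counts.values()))
            total = sum(self.word_counts[label].values())
            for word in text.lower().split():
                # Laplace smoothing so unseen words don't zero out a class
                score += math.log((self.word_counts[label][word] + 1)
                                  / (total + len(self.vocab)))
            scores[label] = score
        return max(scores, key=scores.get)
```

A real system would require far more data, validation against expert-labeled surveys (as the cited paper suggests), and human review of every alert, which is exactly where the policy concerns discussed below come in.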

The University of Michigan identified AI technologies, such as natural language processing (NLP) and sentiment analysis, that can analyze user interactions, such as posts and comments, to identify signs of distress, anxiety, or depression. For example, Breathhh is an AI-powered Chrome extension designed to automatically deliver mental health exercises based on an individual’s web activity and online behaviors. By monitoring and analyzing the user’s interactions, the application can determine appropriate moments to present stress-relieving practices and strategies. Applications, like Breathhh, are just one example of personalized interventions designed by monitoring user interaction.

When using AI to address mental health concerns among K-12 students, policy implications must be carefully considered.

First, developers must obtain informed consent from students, parents, guardians, and all stakeholders before deploying such AI models. The use of AI models is always a topic of concern for policymakers because of the privacy issues that come with them. To safely deploy AI models, privacy protection policies need to be in place to safeguard sensitive information from improper use. There is no comprehensive legislation that addresses those concerns, either nationally or locally.
Second, developers need to identify and account for any bias ingrained in their algorithms through data testing and regular monitoring of data output before it reaches the user. AI has the ability to detect early signs of mental health challenges; however, without proper safeguards in place, we risk failing to protect students from being disproportionately impacted. When collected data reflects biases, it can lead to unfair treatment of certain groups. For youth, this can result in feelings of marginalization and adversely affect their mental health.
Third, effective policies should encourage the use of AI models that provide interpretable results, and policymakers need to understand how those models reach their decisions. Policies should outline how schools will respond to alerts generated by the system. A standard of care needs to be universally recognized, whether through policy or the companies’ internal safeguards, and it should include guidelines for situations in which AI output conflicts with human judgment.

Responsible AI implementation can enhance student well-being, but it requires careful evaluation to ensure students’ data is protected from potential harm. Moving forward, school leaders, policymakers, and technology developers need to consider the benefits and risks of AI-based mental health monitoring programs. Balancing the intended benefits while mitigating potential harms is crucial for student well-being.

© 2024 ArentFox Schiff LLP
by: David P. Grosso and Starshine S. Chun of ArentFox Schiff LLP

For more news on Artificial Intelligence and Mental Health, visit the NLR Communications, Media & Internet section.

Supply Chains are the Next Subject of Cyberattacks

The cyberthreat landscape is evolving as threat actors develop new tactics to keep up with increasingly sophisticated corporate IT environments. In particular, threat actors are increasingly exploiting supply chain vulnerabilities to reach downstream targets.

The effects of supply chain cyberattacks are far-reaching: they can cascade to downstream organizations and can last long after the attack was first deployed. According to an Identity Theft Resource Center report, “more than 10 million people were impacted by supply chain attacks targeting 1,743 entities that had access to multiple organizations’ data” in 2022. Based upon an IBM analysis, the cost of a data breach averaged $4.45 million in 2023.

What is a supply chain cyberattack?

In a supply chain cyberattack, a threat actor targets a business that provides third-party services to other companies, then leverages its access to that business to reach and cause damage to the business’s customers. Supply chain cyberattacks may be perpetrated in different ways.

  • Software-Enabled Attack: This occurs when a threat actor uses an existing software vulnerability to compromise the systems and data of organizations running the software containing the vulnerability. For example, Apache Log4j is an open-source logging library that developers use to record system activity. In November 2021, there were public reports of a Log4j remote code execution vulnerability that allowed threat actors to infiltrate software running on outdated Log4j versions. As a result, threat actors gained access to the systems, networks, and data of many organizations in the public and private sectors that used software containing the vulnerable Log4j version. Although security upgrades (i.e., patches) have since been issued to address the Log4j vulnerability, much software is still running with outdated (i.e., unpatched) versions of Log4j.
  • Software Supply Chain Attack: This is the most common type of supply chain cyberattack. It occurs when a threat actor infiltrates and compromises software with malicious code, either before the software is provided to consumers or by deploying malicious software updates masquerading as legitimate patches. All users of the compromised software are affected by this type of attack. For example, Blackbaud, Inc., a software company providing cloud hosting services to for-profit and non-profit entities across multiple industries, was ground zero for a software supply chain cyberattack after a threat actor deployed ransomware in its systems, with downstream effects on Blackbaud’s customers, including 45,000 companies. Similarly, in May 2023, Progress Software’s MOVEit file-transfer tool was targeted with a ransomware attack that allowed threat actors to steal data from customers that used the MOVEit app, including government agencies and businesses worldwide.

Legal and Regulatory Risks

Cyberattacks often expose personal data to unauthorized access and acquisition by a threat actor. When this occurs, the company’s notification obligations are triggered under the data breach laws of the jurisdictions in which affected individuals reside. In general, data breach laws require affected companies to notify affected individuals of the incident and, depending on the facts of the incident and the number of such individuals, also to notify regulators, the media, and consumer reporting agencies. Companies may also have an obligation to notify their customers, vendors, and other business partners under their contracts with those parties. These reporting requirements increase the likelihood of follow-up inquiries and, in some cases, investigations by regulators. Reporting a data breach also increases a company’s risk of being targeted with private lawsuits, including class actions and lawsuits initiated by business customers, in which plaintiffs may seek various forms of relief, including injunctive relief, monetary damages, and civil penalties.

The legal and regulatory risks in the aftermath of a cyberattack can persist long after a company has addressed the immediate issues that initially caused the incident. For example, in the aftermath of the cyberattack, Blackbaud was investigated by multiple government authorities and targeted with private lawsuits. While the private suits remain ongoing, Blackbaud settled with state regulators ($49,500,000), the U.S. Federal Trade Commission, and the U.S. Securities and Exchange Commission (SEC) ($3,000,000) in 2023 and 2024, almost four years after it first experienced the cyberattack. Other companies that experienced high-profile cyberattacks have also been targeted with securities class action lawsuits by shareholders, and in at least one instance, regulators named a company’s Chief Information Security Officer in an enforcement action, underscoring the professional risks cyberattacks pose to corporate security leaders.

What Steps Can Companies Take to Mitigate Risk?

First, threat actors will continue to refine their tactics and techniques, so organizations must adapt and stay current with cybersecurity regulations and legislation. The Cybersecurity and Infrastructure Security Agency (CISA) urges organizations to educate their developers on writing secure code and verifying third-party components.

Second, stay proactive. Organizations must re-examine not only their own security practices but also those of their vendors and third-party suppliers. If third and fourth parties have access to an organization’s data, it is imperative to ensure that those parties maintain sound data protection practices.

Third, companies should adopt data and cybersecurity guidelines for suppliers at the outset of a relationship, since it may be difficult to get suppliers to adhere to policies after the contract has been signed. For example, some entities have detailed processes requiring suppliers to report attacks and to conduct impact assessments after the fact, and some expect suppliers to follow a specific sequence of steps after a cyberattack. Some entities also apply the same threat intelligence that they use for their own defense to their critical suppliers, and may require suppliers to implement proactive security controls, such as incident response plans, ahead of an attack.

Finally, all companies should strive to minimize threats to their software supply chain by establishing strong security strategies at the ground level.