How to Develop an Effective Cybersecurity Incident Response Plan for Businesses

Data breaches have become more frequent and costly than ever. In 2021, the average data breach cost companies more than $4 million. Threat actors are increasingly likely to be sophisticated. The emergence of ransomware-as-a-service (RaaS) has allowed even unsophisticated, inexperienced parties to execute harmful, disruptive, costly attacks. In this atmosphere, what can businesses do to best prepare for a cybersecurity incident?

One fundamental aspect of preparation is to develop a cyber incident response plan (IRP). The National Institute of Standards and Technology (NIST) identified five basic cybersecurity functions to manage cybersecurity risk:

  • Identify
  • Protect
  • Detect
  • Respond
  • Recover

In the NIST framework, anticipatory response planning is considered part of the “respond” function, indicating how integral proper planning is to an effective response. Indeed, NIST notes that “investments in planning and exercises support timely response and recovery actions, resulting in reduced impact to the delivery of services.”

But what makes an effective IRP? And what else goes into quality response planning?

A proper IRP requires several considerations. The primary elements include:

  • Assigning accountability: identify an incident response team
  • Securing assistance: identify key external vendors including forensic, legal and insurance
  • Introducing predictability: standardize crucial response, remediation and recovery steps
  • Creating readiness: identify legal obligations and information to facilitate the company’s fulfillment of those obligations
  • Mandating experience: develop periodic training, testing and review requirements

After developing an IRP, a business must ensure it remains current and effective through regular reviews at least annually or anytime the business undergoes a material change that could alter either the IRP’s operation or the cohesion of the incident response team leading those operations.

An effective IRP is one of several integrated tools that can strengthen your business’s data security prior to an attack, facilitate an effective response to any attack, speed your company’s recovery from an attack and help shield it from legal exposure in the event of follow-on litigation.

NIST Releases Risk ‘Profile’ for Generative AI

A year ago, we highlighted the National Institute of Standards and Technology’s (“NIST”) release of a framework designed to address AI risks (the “AI RMF”). We noted how it is abstract, like its central subject, and is expected to evolve and change substantially over time, and how NIST frameworks have a relatively short but significant history that shapes industry standards.

As support for the AI RMF, last month NIST released in draft form the Generative Artificial Intelligence Profile (the “Profile”).The Profile identifies twelve risks posed by Generative AI (“GAI”) including several that are novel or expected to be exacerbated by GAI. Some of the risks are exotic and new, such as confabulation, toxicity, and homogenization.

The Profile also identifies risks that are familiar, such as those for data privacy and cybersecurity. For the latter, the Profile details two types of cybersecurity risks: (1) those with the potential to discover or enable the lowering of barriers for offensive capabilities, and (2) those that can expand the overall attack surface by exploiting vulnerabilities as novel attacks.

For offensive capabilities and novel attack risks, the Profile includes these examples:

  • Large language models (a subset of GAI) that discover vulnerabilities in data and write code to exploit them.
  • GAI-powered co-pilots that proactively inform threat actors on how to evade detection.
  • Prompt-injections that steal data and run code remotely on a machine.
  • Compromised datasets that have been ‘poisoned’ to undermine the integrity of outputs.

In the past, the Federal Trade Commission (“FTC”) has referred to NIST when investigating companies’ data breaches. In settlement agreements, the FTC has required organizations to implement security measures through the NIST Cybersecurity Framework. It is reasonable to assume then, that NIST guidance on GAI will also be recommended or eventually required.

But it’s not all bad news – despite the risks when in the wrong hands, GAI will also improve cybersecurity defenses. As recently noted by Microsoft’s recent report on the GDPR & GAI, GAI can already: (1) support cybersecurity teams and protect organizations from threats, (2) train models to review applications and code for weaknesses, and (3) review and deploy new code more quickly by automating vulnerability detection.

Before ‘using AI to fight AI’ becomes legally required, just as multi-factor authentication, encryption, and training have become legally required for cybersecurity, the Profile should be considered to mitigate GAI risks. From pages 11-52, the Profile examines four hundred ways to use the Profile for GAI risks. Grouping them together, some of the recommendations include:

  • Refine existing incident response plans and risk assessments if acquiring, embedding, incorporating, or using open-source or proprietary GAI systems.
  • Implement regular adversary testing of the GAI, along with regular tabletop exercises with stakeholders and the incident response team to better inform improvements.
  • Carefully review and revise contracts and service level agreements to identify who is liable for a breach and responsible for handling an incident in case one is identified.
  • Document everything throughout the GAI lifecycle, including changes to any third parties’ GAI systems, and where audited data is stored.

“Cybersecurity is the mother of all problems. If you don’t solve it, all the other technology stuff just doesn’t happen” said Charlie Bell, Microsoft’s Chief of Security, in 2022. To that end, the AM RMF and now the Profile provide useful and early guidance on how to manage GAI Risks. The Profile is open for public comment until June 2, 2024.

NIST Releases New Framework for Managing AI and Promoting Trustworthy and Responsible Use and Development

On January 26, 2023, the National Institute of Standards and Technology (“NIST”) released the Artificial Intelligence Risk Management Framework (“AI RMF 1.0”), which provides a set of guidelines for organizations that design, develop, deploy or use AI to manage its many risks and promote trustworthy and responsible use and development of AI systems.

The AI RMF 1.0 provides guidance as to how organizations may evaluate AI risks (e.g., intellectual property, bias, privacy and cybersecurity) and trustworthiness. The AI RMF 1.0 outlines the characteristics of trustworthy AI systems, which are valid, reliable, safe, secure, resilient, accountable, transparent, explainable, interpretable, privacy enhanced and fair with their harmful biases managed. It also describes four high-level functions, with associated actions and outcomes to help organizations better understand and manage AI:

  • The Govern function addresses evaluation of AI technologies’ policies, processes and procedures, including their compliance with legal and regulatory requirements and transparent and trustworthy implementation.
  • The Map function provides context for organizations to frame risks relating to AI systems, including AI system impacts and interdependencies.
  • The Measure function uses quantitative, qualitative or mixed-method tools, techniques and methodologies to analyze, benchmark and monitor AI risk and related impacts, including tracking metrics to determine trustworthy characteristics, social impact and human-AI configurations.
  • The Manage function entails allocating risk resources to mapped and measured risks consistent with the Govern function. The Manage function includes determining how to treat risks and develop plans to respond to, recover from and communicate about incidents and events.

NIST released a draft AI Risk Management Framework Playbook to accompany the AI RMF 1.0. NIST plans to release an updated version of the Playbook in the Spring of 2023 and launch a new Trustworthy and Responsible AI Resource Center to help organizations put AI RMF 1.0 into practice. NIST has also provided a Roadmap of its priorities to advance the AI RMF.

Copyright © 2023, Hunton Andrews Kurth LLP. All Rights Reserved.
For more Technology Legal News, click here to visit the National Law Review.