An incident response plan is only useful if it matches the way a real incident actually unfolds.
That is where many organizations go wrong. They write an incident response plan as if it were a static policy document: a list of roles, some phone numbers, a few vague statements about escalation, and a promise to “follow procedures as appropriate.” But real incidents do not unfold in tidy policy language. They move through stages. Suspicious activity is noticed. Someone reports it. Responders try to determine whether it is real. The organization decides how serious it is. Systems or accounts may need to be contained quickly. The root cause must be removed. Operations must be restored. Then the organization must review what happened and improve.
A good incident response plan is valuable because it covers that whole lifecycle. NIST’s current guidance makes this clear. SP 800-61 Rev. 3, finalized in April 2025, says incident response should be integrated into broader cybersecurity risk management and aligned to the NIST Cybersecurity Framework 2.0. NIST explains that all six CSF 2.0 Functions—Govern, Identify, Protect, Detect, Respond, and Recover—play vital roles in incident response. CISA likewise describes an incident response plan as something that should be implemented before, during, and after a cybersecurity incident.
So the best way to explain how to build an incident response plan is to walk through the actual steps of incident response and show how the plan supports each one.
Incident response starts before anything happens
The first step of incident response is preparation.
That can sound obvious, but it is one of the most overlooked truths in cybersecurity. Most organizations think incident response begins when an alert appears on a dashboard or a user reports a suspicious email. In reality, the response begins much earlier. It begins when the company decides what an incident is, who has authority to declare one, how severity levels work, who gets called after hours, what systems are most critical, how evidence is preserved, who can approve disruptive actions, and how recovery priorities are set. NIST emphasizes exactly this idea by tying incident response to the broader risk-management structure of CSF 2.0 rather than treating it as a narrow technical activity.
This means the plan must be built to support the entire incident lifecycle, not just the middle of it.
If an organization is struck by ransomware, for example, the response will go much more smoothly if the plan already answers questions like these: Who can authorize isolation of a production server? Who contacts outside counsel? Who calls the cyber insurer? Who approves engaging an outside forensics firm? Which business systems must be restored first? What alternate communication channel is used if email is down? If none of that has been decided beforehand, the incident response team will spend the first hour arguing instead of acting.
So when we ask what goes into an incident response plan, the first answer is simple: everything the organization will need later, before panic sets in.
The plan must support detection and reporting
The next stage is detection and reporting. This is where suspicious activity is first noticed.
That notice may come from a person or from a tool. A user may report a phishing email. A finance employee may question a suspicious wire request. An endpoint detection platform may flag ransomware behavior. A cloud administrator may see impossible-travel logins or unauthorized privilege changes. A help desk technician may notice that multiple employees are locked out or that files are suddenly inaccessible.
This stage is where the incident response plan starts becoming operational.
A strong plan should define how events are reported, who receives them, how they are triaged, and how quickly they are escalated. CISA’s guidance on incident response planning and cyber incident handling emphasizes that organizations should act through their response processes when signs of compromise appear, rather than waiting for every detail to be confirmed.
In practical terms, the plan should answer questions such as these. If an employee receives a suspicious email, what exactly should they do? Use a phishing-report button? Forward it to a security mailbox? Call the help desk? If the SOC sees a high-confidence alert at 2:00 a.m., who gets paged first? If the issue may involve a privileged account, when is the incident lead notified? If email is affected, what alternate method is used for escalation?
A weak plan says, “Report incidents to IT.” A strong plan says, “Users report suspicious emails through the phishing-report feature or to the security mailbox. High-confidence alerts involving privileged access, ransomware behavior, widespread malware, or regulated-data exposure are escalated immediately to the on-call security lead, who notifies the incident response lead and designated backup according to the severity matrix.”
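That kind of severity matrix can even be expressed as data rather than prose. The Python sketch below shows one hypothetical way to encode the escalation rules just quoted; the role names, severity labels, and trigger indicators are illustrative assumptions, not a standard.

```python
# Minimal sketch of a severity-driven escalation matrix.
# Role names, severity labels, and indicators are illustrative only.

ESCALATION_MATRIX = {
    "low":      ["on_call_analyst"],
    "medium":   ["on_call_analyst", "security_lead"],
    "high":     ["security_lead", "ir_lead", "ir_lead_backup"],
    "critical": ["security_lead", "ir_lead", "ir_lead_backup", "executive_sponsor"],
}

# Indicators that force immediate escalation regardless of initial severity.
IMMEDIATE_ESCALATION_INDICATORS = {
    "privileged_access", "ransomware_behavior",
    "widespread_malware", "regulated_data_exposure",
}

def who_to_notify(severity, indicators):
    """Return the ordered list of roles to page for an alert."""
    if IMMEDIATE_ESCALATION_INDICATORS & set(indicators):
        # Promote low/medium alerts carrying a forcing indicator.
        if severity in ("low", "medium"):
            severity = "high"
    return ESCALATION_MATRIX[severity]
```

Keeping the matrix as data rather than buried in paragraphs makes it easy to test in tabletop exercises and to keep in sync with the on-call rotation.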
That level of clarity matters. Imagine an employee clicks a fake Microsoft 365 password-reset email. Ten minutes later, the account is generating suspicious logins and creating mailbox forwarding rules. If the user has a clear reporting path and the security team has a defined escalation process, the issue may be identified and contained quickly. If reporting is informal and escalation is unclear, the attacker gains time.
This is why the reporting and escalation section of the plan directly supports the detection phase of real incident response.
The plan must support triage and analysis
Once an event is reported, responders have to determine what it really is. This is the triage and analysis phase.
This is where the organization asks several critical questions. Is the event real? Is it malicious? Does it qualify as an incident? How serious is it? What systems, users, or data may be affected? Is the issue contained, or is it still spreading?
The incident response plan should support this phase in several specific ways.
First, it should define what counts as an incident. Not every security event is an incident. A blocked phishing email may be a normal event. A successful credential theft followed by suspicious login activity may be an incident. A confirmed compromise of an administrator account may be a major incident. The plan should make those distinctions clear.
Second, it should define severity levels. Severity is not just an administrative label. It drives who is notified, how urgently actions are taken, whether executives are informed, whether legal is engaged, whether outside experts are called, and how communications are handled. NIST’s incident response guidance frames response activities within structured governance and risk management for exactly this reason.
Third, the plan should identify who performs analysis. Does the security team lead all triage? Does IT gather endpoint or server evidence? Do cloud administrators assist with SaaS and identity activity? Does legal join immediately if sensitive data may be involved?
Consider two examples. In the first, a workstation triggers an antivirus alert on a quarantined test file, and no other suspicious behavior appears. That may remain a low-level security event. In the second, the same alert is accompanied by PowerShell abuse, creation of a new local administrator account, suspicious outbound connections, and similar activity on several hosts. That may be a high-severity incident. The plan should help responders distinguish between those cases.
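The distinction between those two cases can be captured in a simple triage rule. The following Python sketch is one hypothetical way to decide when an event is promoted to an incident; the indicator names and thresholds are assumptions each organization would tune to its own environment.

```python
# Illustrative triage helper: decides whether an alert stays a routine
# event or is promoted, and why. Indicator names and thresholds are
# assumptions for this sketch, not a standard taxonomy.

def triage(indicators, hosts_affected=1, privileged_account=False):
    """Return ("event" | "incident" | "major incident", reason)."""
    suspicious = {"powershell_abuse", "new_local_admin",
                  "suspicious_outbound", "lateral_movement"}
    hits = suspicious & set(indicators)
    if privileged_account or hosts_affected > 3:
        return ("major incident", "privileged access or widespread activity")
    if hits or hosts_affected > 1:
        return ("incident", f"corroborating indicators: {sorted(hits)}")
    return ("event", "isolated alert with no corroborating activity")
```

In the first example above, a quarantined test file with no other activity stays an event; in the second, the corroborating indicators across several hosts push it to a higher classification.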
This is why the definitions, classification, and roles sections of the plan matter so much. They are not just administrative content. They are what guide the organization through the analysis phase when the facts are still emerging.
The plan must support containment
Once responders conclude that a real incident is underway, the next task is containment.
Containment means stopping the damage from spreading or limiting its impact. Depending on the situation, that may involve disabling a compromised account, revoking active sessions, isolating a workstation, blocking malicious domains, removing mailbox forwarding rules, segmenting a server, suspending VPN access, or restricting certain administrative functions.
This is often where cybersecurity collides directly with business operations.
A containment action may be technically obvious but operationally painful. Security may want to disconnect a production server immediately. IT may need a few minutes to preserve logs or perform a safe failover. Operations may need to stop a process in an orderly way. Leadership may need to approve a disruptive decision affecting business continuity.
That is why the incident response plan must say in advance who can authorize what.
A strong plan should identify which containment actions the security team can take immediately, which actions require additional approval, how business stakeholders are involved for disruptive decisions, and how evidence is preserved before major changes are made.
For example, if ransomware is spreading through file shares, the plan might authorize security and infrastructure teams to isolate affected systems immediately while requiring prompt executive and business notification if a critical production system must be taken offline. In a business email compromise case, the plan may permit immediate disabling of the affected mailbox, revocation of tokens, and forced password reset without waiting for a management committee to debate the matter.
This is where scenario-specific playbooks become especially useful. The core incident response plan should explain the containment decision process broadly. Playbooks should then give structured first actions for common scenarios. CISA’s cyber incident response playbooks reflect that same principle: standardized response sequences improve consistency and speed.
A ransomware playbook may include isolating infected hosts, restricting access to shared storage, validating backup status, and notifying leadership. A business email compromise playbook may include disabling suspicious forwarding rules, revoking sessions, reviewing sent mail, checking for MFA changes, and identifying possible payment fraud.
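Because playbooks are essentially ordered checklists, they lend themselves to being stored as structured data. This Python sketch encodes the two example playbooks above; the scenario keys and step wording are illustrative, and a real playbook would carry more detail per step.

```python
# Sketch of scenario playbooks as ordered checklists, mirroring the
# ransomware and BEC examples above. Scenario names are illustrative.

PLAYBOOKS = {
    "ransomware": [
        "isolate infected hosts",
        "restrict access to shared storage",
        "validate backup status",
        "notify leadership",
    ],
    "bec": [
        "disable suspicious forwarding rules",
        "revoke active sessions",
        "review sent mail",
        "check for MFA changes",
        "identify possible payment fraud",
    ],
}

def first_actions(scenario):
    """Return the numbered first actions for a known scenario."""
    steps = PLAYBOOKS.get(scenario)
    if steps is None:
        raise KeyError(f"no playbook for scenario: {scenario!r}")
    return list(enumerate(steps, start=1))
```

Storing playbooks this way makes it straightforward to render them into runbooks, track completion during an incident, and version them alongside the plan.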
So the plan supports containment by defining authority, process, and coordination before the organization is forced to make those decisions under stress.
The plan must support eradication
Containment is not the end of the response. It only stops the damage from expanding. The next step is eradication.
Eradication means removing the cause of the incident and eliminating the attacker’s foothold. That may involve deleting malware, removing persistence, closing an exposed remote access path, rotating compromised credentials, patching exploited systems, removing malicious inbox rules, revoking malicious OAuth grants, or correcting a dangerous cloud misconfiguration.
The incident response plan supports this phase by defining who owns remediation work, what evidence must be preserved before cleanup, how changes are documented, and how responders confirm the threat has actually been removed.
This is one of the most important distinctions in all of incident response. Many organizations contain a problem and then behave as though the incident is over. They disable a compromised account but do not realize the attacker created additional persistence elsewhere. They wipe a laptop without preserving the logs that would have shown whether the compromise spread. They restore a server without fixing the vulnerable pathway that allowed the intrusion.
A Microsoft 365 compromise is a good example. Eradication may require more than changing a password. It may also require revoking sessions, reviewing role assignments, checking for malicious app consent, removing forwarding rules, investigating related service accounts, and examining audit logs for wider abuse. The plan should not need to include every technical command, but it should say who does this work and how it is tracked.
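One simple way to make that ownership and tracking concrete is an explicit checklist with completion state. The sketch below uses hypothetical item names drawn from the example above; it tracks verification, not the cleanup commands themselves.

```python
# Illustrative eradication checklist for a cloud mailbox compromise.
# Item wording mirrors the steps described above; this tracks which
# steps have been verified complete, not how they are performed.

ERADICATION_CHECKLIST = [
    "rotate password",
    "revoke active sessions and tokens",
    "review role assignments",
    "check for malicious app consent",
    "remove forwarding rules",
    "investigate related service accounts",
    "examine audit logs for wider abuse",
]

def outstanding(completed):
    """Return checklist items not yet verified complete."""
    done = set(completed)
    return [item for item in ERADICATION_CHECKLIST if item not in done]
```

An incident is not closed until `outstanding()` is empty and someone has signed off on each item, which is exactly the "who does this work and how it is tracked" question the plan should answer.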
This is also where evidence handling becomes crucial. NIST's incident-response and related forensic guidance stresses the importance of preserving information needed for investigation and follow-up action. If responders rush into cleanup without preserving logs or system evidence, they may destroy the information needed to understand scope, support legal analysis, justify insurance claims, or prepare a defensible post-incident account.
So the plan supports eradication by helping the organization remove the threat without losing the truth about what happened.
The plan must support recovery
After containment and eradication comes recovery.
Recovery is not just “turning things back on.” It is the controlled restoration of systems and operations in a way that reduces the risk of reinfection, preserves business priorities, and confirms that the threat has been removed. NIST’s current guidance explicitly ties recovery to the wider incident response lifecycle, and CSF 2.0 treats Recover as a core Function alongside Respond.
The incident response plan should therefore answer several recovery questions. Which systems are restored first? Who validates that restored systems are safe to reconnect? Who approves return to production? What monitoring will be increased after restoration? What communications go to users, leadership, partners, or customers? What signs would cause a system to be pulled back offline?
This is where business input matters greatly. Security and IT may know how to restore systems technically, but business owners know which systems matter most operationally. If ransomware affects file shares, ERP, payroll, and an internal archive, the easiest system to restore may not be the most important one. A strong plan ensures recovery follows business priorities, not merely technical convenience.
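One way to keep restoration aligned with business priorities is to record a priority tier for each system and sort on it before sorting on effort. In this Python sketch the tier assignments and restore-time estimates are invented for illustration.

```python
# Sketch: order restoration by business-priority tier (lower = more
# critical), then by restore effort. All values below are hypothetical.

SYSTEMS = [
    {"name": "internal archive", "tier": 4, "restore_hours": 1},
    {"name": "payroll",          "tier": 1, "restore_hours": 8},
    {"name": "ERP",              "tier": 1, "restore_hours": 12},
    {"name": "file shares",      "tier": 2, "restore_hours": 4},
]

def restoration_order(systems):
    """Sort by business tier first, then by restore effort."""
    ranked = sorted(systems, key=lambda s: (s["tier"], s["restore_hours"]))
    return [s["name"] for s in ranked]
```

Note that the archive, the easiest system to restore, comes last: sorting on tier before effort is the code-level expression of "business priorities, not merely technical convenience."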
A business email compromise case can illustrate this too. Recovery is not just re-enabling the mailbox. It may also require confirming that fraudulent messages are no longer active, restoring trust in the account, informing affected contacts, and making sure payment or approval workflows are not still at risk.
That is why recovery planning belongs inside the incident response plan. It is part of the incident lifecycle, not a separate afterthought.
The plan must support communications throughout the lifecycle
Communications do not happen only once in an incident. They happen throughout.
During detection, employees need to know how to report suspicious activity. During triage, executives may need an early status update. During containment, users may need instructions not to reconnect devices or approve unexpected MFA prompts. During recovery, they may need restoration updates and operational guidance. If the incident involves exposure of sensitive data or major business disruption, customers, regulators, partners, insurers, or law enforcement may eventually need to be engaged.
That means the communications section of the incident response plan is not separate from the lifecycle. It overlays all of it.
A strong plan should specify who can communicate internally and externally, when legal review is required, what templates exist for common messages, what alternate channels are used if primary systems are unavailable, and how often major stakeholders are updated during serious incidents. CISA’s guidance on incident response planning and tabletop exercises reinforces the importance of defined roles, communication paths, and coordinated response.
For example, after a suspected credential-theft campaign, an employee advisory might explain that the company is responding to suspicious email activity, some users will be required to reset passwords, and unexpected MFA prompts should be denied and reported. During a ransomware response, executives may need short recurring updates summarizing what is known, what remains uncertain, what containment actions are underway, and which business decisions may be needed next.
The point is simple: if communications are not planned, confusion will fill the gap.
The plan must support post-incident review and improvement
The final stage of incident response is review and improvement.
Once the immediate crisis is over, the organization should examine what happened, how the response worked, what slowed it down, what failed, and what needs to change. This is not optional if the organization wants to improve.
The incident response plan should require a formal lessons-learned review for major incidents and major exercises. CISA’s tabletop exercise packages are built for exactly this purpose: helping organizations assess roles, procedures, communications, and coordination before and after real events.
A good review should ask hard questions. Was detection fast enough? Was the reporting path clear? Was the severity classified correctly? Was executive notification timely? Were containment authorities clear? Were logs available? Did recovery order match business priorities? Were communications accurate and disciplined?
The answers should lead to actual plan changes. Not vague promises. Real updates.
For example, the review may show that the after-hours call tree was outdated, cloud logs were not retained long enough, employees did not know how to report suspicious messages, legal was engaged too late, or the recovery sequence focused on less important systems first. Each of those findings should feed directly back into the plan.
So the incident response plan does not just guide the lifecycle. Each real incident should improve the plan for the next one.
The incident response lifecycle mapped to the incident response plan
Looking at the plan through the lifecycle makes its contents much easier to understand.
Preparation is covered by the purpose, scope, definitions, roles, severity criteria, contact lists, communications framework, legal coordination model, and recovery priorities.
Detection and reporting are covered by the reporting process, alert-routing instructions, on-call escalation paths, employee reporting guidance, and alternate communications.
Triage and analysis are covered by incident definitions, severity levels, decision authority, technical investigation roles, and escalation criteria.
Containment is covered by decision authority for disruptive actions, scenario playbooks, communications procedures, business coordination, and evidence-preservation steps.
Eradication is covered by remediation ownership, evidence handling, forensic preservation, technical playbooks, and validation requirements.
Recovery is covered by restoration priorities, business input, system revalidation, communications, and heightened monitoring.
Lessons learned are covered by after-action review requirements, corrective-action tracking, exercise cadence, and plan update procedures.
Seen this way, the plan is not a random list of sections. It is the framework that supports every stage of a real incident.
What should actually go into the plan
A good incident response plan should usually include the following sections:
Purpose and scope.
Definitions and incident criteria.
Severity classification model.
Incident response team roles and backups.
Reporting and escalation paths.
Communications procedures and approvals.
Evidence preservation and documentation requirements.
Containment, eradication, and recovery decision framework.
Scenario-specific playbooks.
Legal, privacy, regulatory, insurance, and contractual coordination.
External contacts and third-party escalation details.
Post-incident review process.
Review and update schedule.
The exact format will vary, but those topics should be there in one form or another.
Who should handle the plan
The plan should generally be owned by the security function, even though IT, legal, leadership, HR, communications, and business units all have roles in using it.
In a larger organization, ownership often sits with the CISO, incident response manager, or security operations leader. In a smaller organization, it may sit with the IT/security manager, vCISO, or another accountable security lead. Ownership means maintaining the plan, confirming contacts, coordinating exercises, updating playbooks, and making sure the document still reflects the environment.
That does not mean one person handles every incident. It means one role is responsible for making sure the plan is real.
How often to update it
At minimum, the plan should be reviewed annually. But that is just the baseline.
It should also be updated after significant incidents, tabletop exercises, major architectural changes, cloud migrations, mergers, leadership changes, major vendor changes, and regulatory or contractual shifts. NIST’s current guidance treats incident response as part of ongoing risk management, which means the plan has to evolve as the organization evolves.
Certain parts should be reviewed more frequently. Contact lists may need quarterly verification. On-call rotations may need regular updates. Scenario playbooks may need revision whenever core platforms change.
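A quarterly contact check is simple enough to automate. The sketch below flags contact entries whose last verification date is older than the review interval; the field names and the 90-day interval are assumptions for illustration.

```python
# Sketch of a contact-list staleness check: flag entries not verified
# within the review interval. Field names and interval are assumptions.

from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=90)

def stale_contacts(contacts, today):
    """Return names of contacts not verified within the review interval."""
    return [c["name"] for c in contacts
            if today - c["last_verified"] > REVIEW_INTERVAL]
```

Running a check like this on a schedule turns "quarterly verification" from an intention into a reminder that actually fires.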
A plan that still assumes the company runs on-premises email when it has moved fully to Microsoft 365 is already outdated, regardless of the date at the top.
A practical checklist for businesses
A business can usually tell whether its incident response plan is real by asking a few practical questions.
Can employees clearly report suspicious activity?
Do we know who declares an incident and who classifies severity?
Do we have named primary and backup contacts for security, IT, legal, communications, leadership, and critical business functions?
Do we know which containment actions can happen immediately and which require additional approval?
Do we have an alternate communication method if email or collaboration tools are affected?
Do we know what evidence must be preserved before cleanup?
Do we have playbooks for our most likely incidents, such as ransomware, phishing, business email compromise, cloud account takeover, insider misuse, and data exposure?
Do we know which systems and processes must be restored first?
Do we know who contacts outside counsel, cyber insurance, and external incident responders?
Have we tested the plan recently in a tabletop exercise?
Do we actually update the plan after incidents and exercises?
If the answer to several of those questions is no, the organization may have a document, but it does not yet have a strong incident response capability.
Final thought
An incident response plan matters because incidents move fast, uncertainty is high, and bad decisions made in the first hour can haunt the organization for months.
The plan should match the real lifecycle of an incident. It should prepare the organization before trouble appears, guide detection and reporting when it does, structure triage when facts are still unclear, support containment when time is short, preserve evidence during eradication, coordinate recovery around business priorities, and force lessons learned after the crisis ends.
That is what a real incident response plan does. It turns cybersecurity from improvisation into organized action.
To make the incident response lifecycle more concrete, below is a sample Incident Response Plan showing how a business might document roles, severity levels, reporting paths, containment authority, recovery priorities, and lessons learned. This sample is not meant to be copied blindly; it is not a cut-and-paste document. It should be adapted to the organization’s size, legal obligations, technical environment, and operational needs.
Sample Incident Response Plan
Incident Response Plan Overview
This Incident Response Plan establishes the process for identifying, reporting, assessing, containing, eradicating, recovering from, and reviewing cybersecurity incidents affecting the organization. Its purpose is to reduce harm, protect business operations, preserve evidence, support decision-making, and restore normal operations as safely and quickly as possible.
This plan applies to all employees, contractors, managed service providers, business units, systems, cloud services, endpoints, servers, applications, and data owned, operated, or used by the organization.
This plan should be used whenever the organization suspects or confirms unauthorized access, malicious activity, service disruption, data exposure, account compromise, malware, ransomware, insider misuse, or any other cybersecurity event that may negatively affect confidentiality, integrity, or availability.
1. Purpose
The purpose of this plan is to ensure the organization responds to cybersecurity incidents in a structured, repeatable, and coordinated manner. The plan is intended to:
- protect people, systems, and data
- reduce operational disruption
- contain incidents before they spread
- preserve evidence for investigation and legal review
- coordinate internal and external communications
- support safe and orderly recovery
- improve future readiness through lessons learned
2. Scope
This plan applies to:
- all company-owned devices and systems
- all cloud and SaaS environments
- all employee and contractor accounts used for company business
- all company data, whether on-premises or cloud-hosted
- all business units and departments
- all third-party providers handling company systems or data
This plan includes incidents involving:
- phishing and business email compromise
- malware and ransomware
- stolen or compromised credentials
- unauthorized access
- suspicious privileged activity
- insider misuse or data theft
- cloud account compromise
- denial of service or major service outages caused by malicious activity
- exposure of sensitive, regulated, or confidential data
3. Definitions
Security Event: Any observable occurrence related to company systems or data that may be relevant to security. Not every event is an incident.
Security Incident: A confirmed or strongly suspected event that threatens the confidentiality, integrity, or availability of systems or data.
Major Incident: A high-impact incident involving significant business disruption, widespread compromise, privileged access abuse, ransomware, material data exposure, or likely legal, regulatory, or public consequences.
Containment: Actions taken to limit the spread or impact of an incident.
Eradication: Actions taken to remove the cause of the incident and eliminate attacker presence or malicious artifacts.
Recovery: Actions taken to restore systems and business operations safely after containment and eradication.
Evidence Preservation: Steps taken to protect logs, forensic data, system images, cloud audit records, and other information needed for investigation or legal review.
4. Incident Severity Levels
Severity 1 – Low
A limited event with little or no business impact and no evidence of broader compromise.
Examples:
- a blocked phishing email with no user interaction
- malware detected and quarantined on a single system with no persistence
- a failed login attempt with no confirmed compromise
Severity 2 – Medium
A confirmed or likely incident affecting one or more users or systems, but currently limited in scope.
Examples:
- one compromised user mailbox
- malware infection on one workstation
- unauthorized login to a non-privileged account
- suspicious SaaS activity with no confirmed data loss
Severity 3 – High
A serious incident with significant operational risk, likely lateral movement, privileged account involvement, or potential sensitive-data exposure.
Examples:
- compromised administrator account
- malware spreading across multiple hosts
- suspicious access to sensitive file shares
- likely data exfiltration
- cloud tenant abuse affecting multiple services
Severity 4 – Critical
A severe incident causing major business disruption, broad compromise, ransomware, confirmed large-scale data exposure, or executive-level crisis conditions.
Examples:
- active ransomware affecting production systems
- widespread outage caused by malicious activity
- confirmed compromise of core identity systems
- major exposure of regulated or confidential data
- destructive attack or significant operational shutdown
5. Incident Response Team
Incident Response Lead
Responsible for coordinating the overall response, assigning tasks, leading incident meetings, documenting decisions, and ensuring appropriate escalation.
Security Lead
Responsible for technical investigation, alert triage, log analysis, evidence preservation guidance, and recommendations for containment and eradication.
IT Operations Lead
Responsible for implementing system changes, isolating devices, disabling accounts, restoring systems, and assisting with recovery.
Cloud / SaaS Administrator
Responsible for reviewing cloud logs, revoking sessions, validating configurations, and supporting containment in Microsoft 365, Google Workspace, AWS, Azure, or other platforms.
Legal / Compliance Contact
Responsible for reviewing legal risk, notification requirements, contractual obligations, evidence considerations, and outside counsel engagement if needed.
Executive Sponsor
Responsible for major business decisions, emergency spending approval, outside vendor engagement approval, and executive communications.
Communications Lead
Responsible for internal messaging, external messaging, media coordination if needed, and review of employee or customer notices.
HR Representative
Responsible for support in incidents involving employee misuse, policy violations, terminations, or disciplinary matters.
Business Unit Representative
Responsible for advising on operational impact, business priorities, and restoration order.
Each role should have a named primary and backup.
6. Reporting Procedures
All employees must report suspicious cybersecurity activity immediately.
Examples of reportable events include:
- suspicious emails or links
- unexpected MFA prompts
- password-reset notifications not initiated by the user
- missing files or locked files
- unusual account activity
- unauthorized transactions or invoice requests
- unexplained system slowdowns or pop-ups
- suspected data loss or misuse
- unexpected administrative changes
Reports should be made through the approved internal channels:
- phishing-report tool or security mailbox
- help desk ticket marked as urgent security issue
- security hotline or on-call number for after-hours issues
If email is unavailable or believed to be affected, users should report incidents through the alternate communication process identified by the organization.
7. Incident Response Lifecycle
7.1 Detection and Initial Reporting
When suspicious activity is detected, the receiving team must:
- document the time and source of the report
- collect basic details about the issue
- notify the security function or designated incident triage team
- preserve any immediately available evidence
- avoid altering the affected system unless necessary for safety or containment
Example: If a user reports a suspicious Microsoft 365 login alert, the help desk should not dismiss it as a password issue. The event should be escalated for security review immediately.
7.2 Triage and Analysis
The security team or designated responders will:
- review logs, alerts, screenshots, and user reports
- determine whether the activity appears malicious
- identify affected users, systems, accounts, or data
- assign a severity level
- decide whether the issue qualifies as an incident
- escalate to the Incident Response Lead if required
Example: A single phishing email that was not clicked may remain an event. A clicked phishing message followed by mailbox rule creation and suspicious sign-ins is likely a true incident.
7.3 Containment
Containment actions are taken to limit damage and stop spread.
Possible containment actions include:
- isolating a workstation or server
- disabling a compromised user or admin account
- revoking active sessions or tokens
- blocking malicious IPs, domains, or email senders
- removing malicious mailbox rules
- suspending VPN access
- segmenting affected systems from the network
Containment decisions should consider business impact, evidence preservation, and the possibility of attacker persistence.
Example: If ransomware is detected on a file server, affected hosts may need immediate isolation, but responders should preserve evidence where feasible and notify operations if critical workflows will be interrupted.
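The sequencing in that example, notify operations before isolating hosts that carry critical workflows, can be expressed as a small helper. The `isolate` and `notify_ops` callables are hypothetical stand-ins for real EDR and operations integrations:

```python
def contain_host(host: str, critical_workflows: bool, isolate, notify_ops) -> None:
    """Containment sketch: isolate a host, warning operations first when
    critical workflows will be interrupted.

    `isolate` and `notify_ops` are hypothetical callables standing in for
    whatever EDR/ops tooling the organization actually uses.
    """
    if critical_workflows:
        notify_ops(f"Isolating {host}; critical workflows will be interrupted")
    isolate(host)

# Record the action order using simple stand-in functions.
actions = []
contain_host(
    "fileserver01",
    critical_workflows=True,
    isolate=lambda h: actions.append(("isolate", h)),
    notify_ops=lambda msg: actions.append(("notify", msg)),
)
```

The point of the sketch is the ordering: the business-impact notification happens before the disruptive action, never after.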
7.4 Eradication
After containment, responders remove the root cause of compromise.
Possible eradication actions include:
- deleting malware or malicious scripts
- removing persistence mechanisms
- patching exploited systems
- correcting unsafe configurations
- rotating compromised passwords and secrets
- revoking unauthorized application consent
- rebuilding compromised systems from trusted images
Example: In a business email compromise, eradication may involve more than resetting the user password. It may also require token revocation, removal of forwarding rules, review of delegated access, review of inbox rules, and confirmation that no malicious app permissions remain.
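Because business email compromise eradication involves several independent steps, a checklist check helps ensure none are skipped. A minimal sketch, with step names taken from the example above:

```python
# Required eradication steps for a business email compromise, per the plan text.
BEC_STEPS = {
    "password_reset",
    "token_revocation",
    "forwarding_rules_removed",
    "delegated_access_reviewed",
    "inbox_rules_reviewed",
    "app_consents_reviewed",
}

def remaining_steps(completed: set) -> set:
    """Return eradication steps still outstanding before the case can close."""
    return BEC_STEPS - completed

# Resetting the password and revoking tokens still leaves four steps open.
left = remaining_steps({"password_reset", "token_revocation"})
```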
7.5 Recovery
Recovery restores systems and operations safely.
Recovery steps may include:
- rebuilding systems
- restoring files from clean backups
- validating the integrity of restored systems
- reconnecting systems only after approval
- increasing logging and monitoring
- communicating restoration guidance to users
- prioritizing restoration based on business impact
Example: If multiple systems are affected, payroll, ERP, customer service, or manufacturing systems may need restoration before lower-priority archive or test systems.
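Restoration ordering by business impact can be made mechanical once priority tiers are agreed in advance. A sketch with illustrative tiers (lower number restores first):

```python
# Illustrative business-impact tiers; real tiers come from the organization's
# business impact analysis, not from this example.
PRIORITY = {"payroll": 1, "erp": 1, "customer_service": 2,
            "manufacturing": 2, "archive": 9, "test": 9}

def restoration_order(systems: list) -> list:
    """Order affected systems by business impact; unknown systems restore last."""
    return sorted(systems, key=lambda s: PRIORITY.get(s, 99))

order = restoration_order(["test", "payroll", "archive", "erp"])
# order → ["payroll", "erp", "test", "archive"]
```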
7.6 Lessons Learned
After the incident is stabilized, the organization will conduct an after-action review.
The review should address:
- how the incident was detected
- whether reporting worked properly
- whether severity was classified correctly
- whether escalation was timely
- whether containment actions were appropriate
- whether evidence was preserved
- whether communications were clear
- whether recovery followed business priorities
- what improvements are required
Corrective actions must be assigned owners and due dates.
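The owner-and-due-date requirement is only useful if someone checks it. A minimal tracker sketch that flags overdue corrective actions (the sample items are hypothetical):

```python
from datetime import date

def overdue(actions: list, today: date) -> list:
    """Return corrective actions past their due date.

    Each action is a (description, owner, due_date) tuple.
    """
    return [a for a in actions if a[2] < today]

# Hypothetical corrective actions from an after-action review.
items = [
    ("Enable MFA number matching", "IT Ops", date(2025, 6, 1)),
    ("Update phishing playbook", "Security", date(2025, 9, 1)),
]
late = overdue(items, today=date(2025, 7, 1))
```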
8. Decision Authority
The plan should clearly state who can approve which actions.
Security / Incident Response Lead may:
- declare an incident
- assign severity
- initiate technical investigation
- recommend containment measures
- activate the incident response team
IT Operations Lead may:
- isolate endpoints or servers
- disable accounts when authorized, or under preapproved standing authority
- perform restoration tasks
- implement approved technical changes
Executive Sponsor may:
- approve major business disruption actions
- approve emergency vendor engagement
- approve major communications actions
- approve activation of business continuity processes
Legal / Compliance may:
- direct legal review of notifications
- advise on breach obligations
- recommend outside counsel engagement
- advise on law enforcement contact
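The role-to-action mapping above is effectively an approval matrix, and encoding it makes lookups unambiguous during an incident. A sketch with an abridged, illustrative matrix:

```python
# Illustrative approval matrix mirroring Section 8 (roles and actions abridged).
AUTHORITY = {
    "declare_incident": {"IR Lead"},
    "isolate_endpoint": {"IT Ops"},
    "disable_account": {"IT Ops"},
    "major_disruption": {"Executive Sponsor"},
    "breach_notification_review": {"Legal"},
}

def can_approve(role: str, action: str) -> bool:
    """Check whether a role may approve a given action; unknown actions deny."""
    return role in AUTHORITY.get(action, set())
```

Defaulting unknown actions to "deny" forces ambiguous cases up to a human decision rather than silently permitting them.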
9. Communications Procedures
All communications during an incident must be coordinated.
Internal Communications
Internal messaging should be limited to those with a need to know. Employees should receive clear, practical instructions without speculation.
Examples:
- do not click similar emails
- do not reconnect isolated systems
- expect password resets
- report any suspicious MFA prompts
- use alternate communication channels if needed
Executive Communications
Executives should receive concise situation updates including:
- what is known
- what is not yet known
- current severity
- systems or users affected
- actions underway
- business decisions needed
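Keeping executive updates to those six headings is easier with a fixed template. A sketch that formats a situation update from the fields above (the sample content is hypothetical):

```python
def sitrep(known, unknown, severity, affected, actions, decisions):
    """Format a concise executive situation update (illustrative template)."""
    return "\n".join([
        f"Known: {known}",
        f"Not yet known: {unknown}",
        f"Severity: {severity}",
        f"Affected: {affected}",
        f"Actions underway: {actions}",
        f"Decisions needed: {decisions}",
    ])

update = sitrep(
    known="Phishing campaign targeting finance staff",
    unknown="Full scope of mailbox access",
    severity="high",
    affected="5 mailboxes",
    actions="Session revocation and mailbox-rule review",
    decisions="Approve forced password reset for finance group",
)
```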
External Communications
Only authorized personnel may communicate with:
- customers
- partners
- media
- regulators
- law enforcement
- insurers
- outside service providers
All external communications should be coordinated with legal and executive leadership where appropriate.
10. Evidence Preservation
Responders must preserve relevant evidence whenever feasible before major remediation actions are taken.
Evidence may include:
- endpoint logs
- server logs
- firewall logs
- cloud audit records
- identity-provider logs
- email headers and mail traces
- screenshots
- memory captures where appropriate
- system images or snapshots
- suspicious files or scripts
Responders should document:
- who collected the evidence
- when it was collected
- where it was stored
- who had access to it
Affected systems should not be wiped, rebuilt, or reimaged until evidence-preservation needs have been considered.
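The who/when/where/access documentation above amounts to a chain-of-custody record, and hashing each item at collection time makes later tampering detectable. A minimal sketch (field names and the storage path are illustrative):

```python
import hashlib
from datetime import datetime, timezone

def custody_record(item: str, data: bytes, collector: str, location: str) -> dict:
    """Build a chain-of-custody entry: who collected what, when, and where,
    plus a SHA-256 hash so any later alteration of the item is detectable."""
    return {
        "item": item,
        "sha256": hashlib.sha256(data).hexdigest(),
        "collected_by": collector,
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "stored_at": location,
        "access_log": [collector],   # everyone who has handled the item
    }

rec = custody_record("mailtrace.csv", b"sample evidence bytes",
                     "J. Analyst", "evidence-share/case-042")
```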
11. External Contacts
The plan should maintain a current list of:
- cyber insurance carrier
- outside counsel
- third-party incident response firm
- managed security provider
- cloud-service provider contacts
- SaaS vendor security contacts
- internet service provider
- relevant law enforcement contacts if applicable
These contacts should be stored in a location accessible even if normal corporate systems are unavailable.
12. Common Incident Playbooks
The organization should maintain playbooks for likely incident types, including:
- phishing
- business email compromise
- ransomware
- malware outbreak
- compromised privileged account
- cloud or SaaS compromise
- insider misuse
- suspected data exfiltration
- lost or stolen device
- third-party breach affecting company systems or data
Each playbook should provide first actions, investigation steps, containment guidance, communications triggers, and recovery considerations.
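Giving every playbook the same five sections keeps them consistent and makes gaps obvious. A skeleton sketch (the phishing content shown is illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Playbook:
    """Skeleton for an incident-type playbook, with the five sections the plan requires."""
    incident_type: str
    first_actions: list = field(default_factory=list)
    investigation_steps: list = field(default_factory=list)
    containment_guidance: list = field(default_factory=list)
    communications_triggers: list = field(default_factory=list)
    recovery_considerations: list = field(default_factory=list)

phishing = Playbook(
    incident_type="phishing",
    first_actions=["preserve the reported message", "check for other recipients"],
    containment_guidance=["block the sender", "purge the message from mailboxes"],
)
```

An empty section (like `investigation_steps` above) is immediately visible as work still to be written, rather than hiding in free-form prose.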
13. Review and Testing
This Incident Response Plan must be:
- reviewed at least annually
- reviewed after any major incident
- reviewed after significant tabletop exercises
- updated after major environment or business changes
- tested periodically through tabletop exercises or simulations
Contact lists should be verified on a regular schedule.
14. Sample First-Hour Response Checklist
When a likely incident is identified, responders should:
- Record the time the issue was reported or detected.
- Notify the security team and Incident Response Lead.
- Determine whether the event is a confirmed or likely incident.
- Assign a preliminary severity level.
- Preserve immediately available evidence.
- Identify obviously affected users, systems, or accounts.
- Take approved containment actions if urgent.
- Notify required internal stakeholders based on severity.
- Begin an incident log documenting actions and decisions.
- Schedule a response coordination call if needed.
This first-hour checklist is especially useful because many organizations lose time at the start of an incident simply deciding who is doing what.
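The checklist above can double as the seed of the incident log itself: timestamping each step as it completes answers "who did what, when" automatically. A minimal sketch:

```python
from datetime import datetime, timezone

# First-hour steps, condensed from the checklist above.
FIRST_HOUR = [
    "record detection time",
    "notify security and IR Lead",
    "confirm likely incident",
    "assign preliminary severity",
    "preserve evidence",
    "identify affected assets",
    "contain if urgent",
    "notify stakeholders",
    "start incident log",
    "schedule coordination call",
]

def log_step(log: list, step: str) -> None:
    """Append a timestamped entry to the incident log as each step completes."""
    log.append((datetime.now(timezone.utc).isoformat(), step))

log = []
for step in FIRST_HOUR[:3]:      # e.g. the first three steps are done
    log_step(log, step)
```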
15. Approval and Ownership
This plan is owned by: [Security Lead / CISO / IT Security Manager]
Executive sponsor: [CIO / COO / CISO / CEO designee]
Effective date: [Insert Date]
Last reviewed: [Insert Date]
Next review date: [Insert Date]