
Assume Compromise: Designing for Continuity and Detection

Matt Lawrence,

Director of Cyber Security Operations

“Assume they’ll get in. Design for what happens next.”

This blunt advice emerged from the costly cyber incidents suffered by major UK retailers Marks & Spencer and the Co-op.

No matter how strong your defences are, you should operate under the assumption that attackers will breach your systems eventually. The goal for security leaders is to minimise the damage and keep the business running when that happens.

In the current threat landscape, prevention alone is not enough. It’s time to shift our mindset to cyber resilience: building systems that can detect intruders quickly, contain their impact, and maintain critical operations even under attack.

The Shift from Prevention to Resilience

Organisations once focused heavily on keeping attackers out with strong perimeter defences. But today, breaches often occur through stolen credentials or social engineering—making intrusion almost inevitable.

“This shift has led to an ‘assume breach’ mindset: instead of betting everything on stopping intrusions, we plan for rapid detection and response on the assumption a breach will occur.”

Prevention still matters, but detection and response are equally critical. That means integrating monitoring, alerting, and response playbooks into your core security programme. It also means designing IT and business processes such that if one part of your network is compromised, it can be isolated while the rest of the business continues to operate.

Detection and Continuity as Core Functions

A useful guide for this resilience-oriented approach is the NIST Cybersecurity Framework (CSF).

After the traditional “Protect” function come “Detect”, “Respond”, and “Recover”. These functions explicitly focus on finding intrusions fast, containing them, and keeping the business running.

The framework emphasises continuous monitoring to spot anomalies and verify security measures. It also stresses having robust incident response and recovery plans to restore operations quickly and maintain business continuity after an incident.

The latest NIST CSF guidance highlights enhanced cyber resilience as a key benefit, noting it “improves detection, response, and recovery from incidents, supporting business continuity”.

For a security leader, aligning with NIST means making sure you have capabilities in place to detect malicious activity in real-time, respond decisively, and recover critical systems or data. Detection might include a 24/7 SOC with SIEM/SOAR, user behaviour analytics, and threat hunting. Response involves predefined incident handling procedures and teams ready to act. Recovery covers backup strategies, disaster recovery sites, and business continuity plans so you can continue serving customers even if primary systems go down.

The bottom line: NIST CSF reinforces that detecting and recovering from breaches is just as vital as preventing them.

Modern Attack Tactics

Identity is the new perimeter, and attackers target it aggressively. Groups like Scattered Spider exemplify this trend: they have repeatedly breached companies by impersonating users and tricking IT helpdesks into resetting passwords or disabling MFA.

With a convincing phone call, an attacker can reset a privileged account’s credentials and log in as a legitimate user, essentially walking in through an unlocked door. It’s no surprise that valid credentials are often more valuable to attackers than malware now.

Once inside, attackers aim to move laterally, pivoting across multiple systems and accounts using stolen credentials and admin tools. Techniques like MITRE’s “Valid Accounts” involve using legitimate passwords and remote access tools (e.g. RDP, TeamViewer, AnyDesk) to stay hidden. Because they use familiar IT tools, these actions often evade detection. Consider how these tactics map to MITRE ATT&CK in the context of an identity-driven breach:

Initial Access via Social Engineering:

In the Scattered Spider incidents, attackers called service desk staff (vishing), convincing them to issue password resets for attacker-controlled accounts. MITRE categorises this as voice phishing (T1566.004 – Spearphishing Voice). By exploiting human trust, the attackers obtained valid login credentials without cracking a single password themselves.

Privilege Abuse and Lateral Movement:

Armed with credentials (or even stolen password hashes from Active Directory databases like NTDS.dit), attackers could access additional systems and accounts. In one case, attackers stole an AD database and cracked employee passwords. This aligns with MITRE techniques T1078 (Valid Accounts) and T1110.002 (Brute Force: Password Cracking).

With those accounts, they moved through the network. Scattered Spider is known to install multiple remote admin tools and use built-in services (MITRE T1021 – Remote Services) to reach other machines and persist. In other words, once they had a beachhead, they spread laterally by logging in like any other user or admin, which is much harder to spot than malware.

Evasion of MFA and Monitoring:

Attackers have found ways to defeat strong authentication. Techniques like Adversary-in-the-Middle (T1557) allow them to hijack session cookies, bypassing MFA entirely. Some phishing frameworks will downgrade MFA or trick users into using less secure methods.

In helpdesk scams, attackers convince staff to re-enrol a new MFA device or turn off MFA (MITRE covers this as T1556.006 – Modify Authentication Process: Multi-Factor Authentication). The result: an attacker can impersonate a legitimate user with a valid session token or newly enrolled device, making the intrusion nearly invisible to traditional defences.

The implication for defenders is clear:
Identity compromise and lateral movement are the norm in modern breaches.

This is why “assume compromise” must inform architecture and monitoring. You can’t just watch the front gate; you need to watch what’s happening inside too.

Lessons from Real Breaches

In April–May 2025, Scattered Spider targeted UK retailers including M&S and Co-op. M&S faced a £300M ransomware hit, while Co-op limited damage through swift response—highlighting the value of resilience.

Marks & Spencer (M&S):

Attackers allegedly gained access via a compromised third-party helpdesk (Tata), using social engineering to obtain credentials. Once inside, they deployed DragonForce ransomware and stole customer data. Caught off guard, M&S had to revert to manual operations—logging shipments by hand—as key systems failed. Recovery took weeks, with an estimated £300M impact. The case painfully exposed gaps in continuity planning and resilience.

Co-op:

Targeted in a similar attack, Co-op had personal data from 6.5 million members stolen. Early detection and a swift response made the difference. By proactively shutting down core systems, Co-op contained the threat before ransomware could be deployed. Though they faced data loss and reputational impact, they avoided a full operational shutdown—turning a potential crisis into a manageable breach. Their preparedness and decisive containment preserved business continuity.

Harrods:

Targeted in the same campaign, Harrods reportedly spotted the attack in progress and cut off Internet access from its internal network. By rapidly isolating its environment, Harrods prevented data theft or encryption altogether. Normal operations continued with minimal impact. This shows the power of quick detection and decisive containment, essentially executing an “assume breach” playbook in real time.

The contrast between M&S and the others is striking. M&S did have security investments, but by their own admission they were “unlucky… through human error” in falling for the helpdesk scam. Once the attackers were in, it appears M&S lacked the early detection or network segmentation that might have contained the breach.

As a result, the attackers had time to exfiltrate data and trigger a ransomware event. Co-op and Harrods, on the other hand, treated the intrusion as inevitable and acted fast to limit damage. Essentially, they assumed compromise and executed drastic continuity measures (like an emergency shutdown or network cutoff) to save the enterprise.

Key Lesson:
Plan for Breach to greatly improve outcomes

Security leaders must prepare for when—not if—attackers get in.

  • Early detection and containment, as seen at Co-op and Harrods, can prevent major damage; M&S lacked such readiness and faced costly downtime.
  • When you design systems and drills with compromise in mind, you don’t have to improvise under fire; you already know how to keep critical services running and what to shut down or isolate.

Applying Assumed Compromise

Acknowledging the “assume breach” mindset is one thing—applying it enterprise-wide is another. Here are practical steps across architecture, incident response, and detection engineering to enable continuity and rapid threat detection:

Architect for Containment and Continuity

  • Segment and isolate your network so that a breach in one area doesn’t collapse everything. Flat networks are risky—if one account is compromised, attackers can often access everything. To prevent this, implement network segmentation and least privilege access.
  • Keep critical systems on separate VLANs or cloud VPCs with strict controls. Use identity-based policies to ensure even valid credentials only access what’s necessary. This limits lateral movement and helps contain breaches.
  • Build in kill-switches or isolation capabilities:
    • If something looks wrong, administrators should be able to quarantine a host or lock down a segment quickly without bringing the whole business down. This limits the blast radius of an intruder and preserves operational continuity for the rest of the environment.
    • Design your infrastructure with resilience in mind. Redundancy and manual fallbacks are key.
    • Identify any mission-critical systems that, if ransomware hit them, would halt operations (e.g. ERP systems, payment platforms). Strengthen those by maintaining reliable offline backups, hot standby systems, or alternate manual processes.
  • Build a resilient architecture. In the case of M&S, dependency on one ordering system forced a reversion to pen and paper when it went down. A resilient architecture would involve having either a backup system to take over or predefined manual procedures to keep essential business functions running (and staff trained to execute them).
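The kill-switch idea above can be sketched in code. This is a toy model, not a real EDR integration: the inventory structure, segment names, and `isolate_host` function are all hypothetical, standing in for vendor-specific quarantine APIs. The point it illustrates is that isolating one host (or segment) should leave the rest of the estate operating.

```python
# Hypothetical kill-switch sketch: quarantine one host while recording
# which hosts in other segments keep running. Real EDR platforms expose
# similar but vendor-specific containment calls.

def isolate_host(hostname: str, inventory: dict[str, dict]) -> dict:
    """Quarantine `hostname` and list hosts outside its segment that continue."""
    host = inventory[hostname]
    host["network"] = "quarantined"  # drop all traffic except the management channel
    unaffected = [h for h, meta in inventory.items()
                  if meta["segment"] != host["segment"]]
    return {"isolated": hostname, "still_operating": unaffected}

# Illustrative inventory: three hosts across three network segments.
inventory = {
    "web-01":  {"segment": "dmz",    "network": "open"},
    "erp-01":  {"segment": "core",   "network": "open"},
    "till-07": {"segment": "retail", "network": "open"},
}

result = isolate_host("web-01", inventory)
# The DMZ host is cut off; the ERP and retail systems stay up.
```

In a segmented network this decision can be made quickly and locally; in a flat network, "isolate the compromised host" and "shut everything down" are often the same action.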

Ask yourself: “If System X failed, can we still take orders, serve customers, or ship product?” If the answer is no, then plan a workaround in advance, don’t wait for the crisis to figure it out.

  • Secure the identity layer as part of your architecture. Attackers love to exploit identity, so invest in phishing-resistant MFA (e.g. FIDO2 keys) and tighten up identity recovery workflows. This could mean requiring in-person verification for critical password resets, using identity proofing for helpdesk calls, or at least implementing callback verification to confirm that a request is legitimate.
  • Consider using conditional access policies: for instance, if someone logs in from a new country or an unmanaged device, require additional verification or limit their access. The goal is to make it harder for attackers to leverage a single stolen credential to move freely.
  • Include your supply chain and partners in your security planning. Third-party vendors — especially those with network access or support roles like outsourced helpdesks — can become weak links in your security posture. Treat them as an extension of your perimeter: enforce least privilege access, segment their connections, and actively monitor their activity.
  • Proactively engage vendors about their business continuity and incident response plans, and establish a clear communication strategy in case a breach on their side affects your organisation.
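The conditional-access bullet above can be sketched as a simple decision function. Real identity platforms express these policies declaratively rather than in code, and the signals and outcomes here (known countries, managed device, step-up) are illustrative, but the shape of the logic is the same: a single valid credential should not be enough on its own.

```python
# Sketch of a conditional-access decision, assuming a simplified sign-in
# context. The signal names and outcome strings are illustrative.

KNOWN_COUNTRIES = {"GB"}  # countries where this user normally signs in

def access_decision(country: str, managed_device: bool,
                    phishing_resistant_mfa: bool) -> str:
    if country not in KNOWN_COUNTRIES or not managed_device:
        # Unusual location or unmanaged device: demand extra verification.
        return "require_step_up"
    if not phishing_resistant_mfa:
        # Valid credential but weaker MFA: allow only limited access.
        return "limit_access"
    return "allow"

# A stolen password used from a new country triggers step-up, not access:
decision = access_decision(country="US", managed_device=True,
                           phishing_resistant_mfa=True)
```

The design point is that each extra signal an attacker must forge (device state, location, MFA strength) raises the cost of turning one stolen credential into free movement.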

Building resilience means designing your environment to withstand failure or compromise — whether internal or external — by isolating threats and maintaining operational continuity.

Prepare Your Team and Plans for Incidents

Technology alone won’t save the day if your people and processes aren’t ready. An assume-compromise strategy calls for serious incident response (IR) planning and regular drills.

  • Develop a comprehensive IR plan that answers: if we detect an intruder, who does what, immediately? Ensure everyone from the SOC analysts up to executives knows their role in a major incident. This includes technical steps (e.g. network isolation procedures, backup restoration steps) and business steps (e.g. communications, legal notifications, manual workarounds).
    Crucially, practice these plans. Run realistic breach simulations and “purple team” exercises where defenders face off against simulated attacker tactics. This will train your staff to respond under pressure and reveal gaps in your procedures. You cannot build resilience without testing it.
  • Prepare for Total System Failure. One critical scenario to rehearse is a complete failure of essential systems—often called a “CHAOS exercise” or full restore test. For example, simulate the loss of your primary data centre or cloud environment. Could your business continue? Do you know how to rebuild from scratch or switch to secondary systems?
    The Co-op’s experience of shutting everything down is instructive—they had to operate in a degraded state temporarily. If you’ve practiced running in “manual mode” or restoring from backups, you’ll manage such disruptions more effectively than if it’s your first time.
  • Recovery Planning and Backup Strategy. Recovery planning should be tightly integrated with detection and response. NIST highlights the importance of having recovery plans ready to restore services quickly and maintain trust. Ensure your backups are not only in place but regularly tested—an untested backup is no backup at all. Keep some backups offline or immutable to prevent attackers from encrypting them.
  • Training and Awareness. Continuity also depends on staff awareness. Many attacks begin with phishing or phone scams, so train frontline teams—helpdesk, IT support, call centres—to spot social engineering. Foster a culture where verifying requests is encouraged, and escalation is acceptable when something feels wrong.
    In the M&S case, a simple policy—like requiring a verified ticket or manager approval for password resets—could have stopped the attack.
  • Communication plans. If a breach happens, how will you inform executives, employees, customers, and possibly regulators?
    Transparent and timely communication reduces confusion and preserves trust during an incident. Decide in advance the criteria for invoking disaster recovery mode or communicating outages.

These non-technical aspects of incident response are essential for continuity; they keep everyone on the same page and focused on the recovery.
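The backup-testing point above ("an untested backup is no backup at all") can be sketched as a minimal restore-verification check: restore the backup to a scratch location and confirm it matches the source by checksum. The file paths and the in-place copy standing in for a real restore job are illustrative; in practice this would run against your actual backup tooling on a schedule.

```python
# Minimal restore-verification sketch: back up a file, restore it to a
# scratch directory, and compare SHA-256 checksums. The copy here is a
# stand-in for a real backup/restore job.

import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(source: Path, backup: Path) -> bool:
    """Restore `backup` into a temp dir and confirm it matches `source`."""
    with tempfile.TemporaryDirectory() as scratch:
        restored = Path(scratch) / source.name
        shutil.copy2(backup, restored)  # stand-in for the real restore step
        return sha256(restored) == sha256(source)

# Example: simulate one backed-up file and verify it restores intact.
with tempfile.TemporaryDirectory() as d:
    src = Path(d) / "orders.db"
    src.write_bytes(b"critical order data")
    bak = Path(d) / "orders.db.bak"
    shutil.copy2(src, bak)
    ok = verify_restore(src, bak)
```

A scheduled job that runs a check like this and alerts on failure turns "we have backups" into "we know our backups restore".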

Invest in Detection Engineering and Analytics

Assuming attackers are already inside, rapid detection is critical. Traditional signature-based tools often miss modern threats like phishing or credential theft. That’s where behavioural detection and continuous monitoring come in.

  • Augment signature-based tools with behavioural detection: look for patterns of activity that indicate a possible intruder, even if the malware or technique is new. Mature teams are shifting to behavioural techniques, focusing on what users and processes do, not just what they look like.

For example: an employee account suddenly accesses 10 database servers it never touched before, or a user logs in at 3 AM from a foreign IP and then initiates a mass download of files. Those are big red flags. Develop alerts for things like impossible travel (a user logging in from London and then New York within 30 minutes), multiple failed logins across various accounts (password spraying), new privileged group assignments, or the same device using multiple VPN accounts. These can all signal an attacker at work.
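The impossible-travel alert mentioned above reduces to a speed check: if two consecutive logins imply the user covered the distance between them faster than any traveller could, raise an alert. A minimal sketch, with an illustrative 900 km/h threshold (roughly airliner speed):

```python
# Impossible-travel detection sketch: flag a login pair whose implied
# travel speed exceeds a plausible maximum. Threshold is illustrative.

from math import asin, cos, radians, sin, sqrt

def haversine_km(a: tuple, b: tuple) -> float:
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def impossible_travel(prev: tuple, curr: tuple, max_kmh: float = 900) -> bool:
    """prev/curr: ((lat, lon), epoch_seconds). True if implied speed is too high."""
    (loc1, t1), (loc2, t2) = prev, curr
    hours = max((t2 - t1) / 3600, 1e-6)  # guard against divide-by-zero
    return haversine_km(loc1, loc2) / hours > max_kmh

london = (51.5074, -0.1278)
new_york = (40.7128, -74.0060)

# Same account "logs in" from London, then New York 30 minutes later:
alert = impossible_travel((london, 0), (new_york, 1800))
```

In production this runs over the sign-in log per user, and benign causes (VPN egress points, shared accounts) need to be tuned out, but the core test is exactly this ratio.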

Use MITRE ATT&CK to Strengthen Detection Coverage

MITRE ATT&CK is a powerful framework for mapping detection capabilities. Focus on key tactics related to identity and lateral movement—Initial Access, Credential Access, Lateral Movement, and Privilege Escalation. For each, ensure you can detect or mitigate common techniques.

Examples include:

  • Credential Dumping: Monitor for tools like ntdsutil or VSSAdmin on domain controllers.
  • MFA Persistence: Alert on changes to secondary authenticators for high-value accounts.
  • Remote Access Tools: Flag unexpected installations or activity from apps like TeamViewer or AnyDesk on sensitive systems.
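The detection examples above can be expressed as simple matching rules over endpoint telemetry. This sketch uses an invented in-memory event format; in practice these would be SIEM queries (e.g. Sigma rules) over real process-creation logs, and the process names and host roles shown are illustrative.

```python
# Sketch of the detection examples as log-matching rules: credential-dumping
# tools on domain controllers, remote-access tools on sensitive systems.
# Event format and role labels are illustrative, not a real SIEM schema.

SUSPICIOUS = {
    "credential_dumping": {"ntdsutil.exe", "vssadmin.exe"},
    "remote_access_tool": {"teamviewer.exe", "anydesk.exe"},
}

def match_events(events: list) -> list:
    """events: dicts with 'host', 'role', 'process'. Returns alert tuples."""
    alerts = []
    for e in events:
        proc = e["process"].lower()
        if e["role"] == "domain_controller" and proc in SUSPICIOUS["credential_dumping"]:
            alerts.append(("credential_dumping", e["host"], proc))
        if e["role"] == "sensitive" and proc in SUSPICIOUS["remote_access_tool"]:
            alerts.append(("remote_access_tool", e["host"], proc))
    return alerts

events = [
    {"host": "dc01", "role": "domain_controller", "process": "ntdsutil.exe"},
    {"host": "ws42", "role": "workstation",       "process": "anydesk.exe"},
]
alerts = match_events(events)
```

Note the role-awareness: `anydesk.exe` on an ordinary workstation may be routine, while the same binary on a domain controller or payment system warrants an alert. Context, not just the tool name, is what makes these tripwires low-noise.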

Systematically reviewing attacker behaviours and adding detection logic boosts your chances of early threat discovery. The Co-op detected their threat before ransomware launched—likely thanks to well-placed tripwires. You want those in your environment too.

Invest in your monitoring team or partners

Detection rules are useless if no one’s watching. Whether it’s an internal SOC or external MXDR, ensure alerts are monitored 24/7 and acted on. Use tools to reduce noise and prioritise real threats.

Adopt an “assume breach” mindset: treat every alert seriously until proven benign. Breaches often go unnoticed because alerts are dismissed or misjudged. Encourage curiosity and urgency—better a false alarm than a missed attack.

Use Threat Intelligence to Strengthen Detection

Stay informed on attacker tools and tactics, and ensure your controls address them. If threat intel highlights a tool or technique, proactively search for it in your logs.

Build detection rules based on real-world TTPs—not just theory. When gaps emerge (e.g. missing log detail or overlooked lateral movement), refine your detection engineering. This cycle of learning and improvement reflects the “Respond and Recover” principle in frameworks like NIST—use every insight to strengthen your defences.
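The retroactive hunt described above can be sketched as an indicator sweep over historical logs. The indicator list and log lines here are invented for illustration, and a real hunt would run as a SIEM query over weeks of telemetry rather than over in-memory strings, but the workflow is the same: take TTPs from threat intel, search backwards, and turn any gaps you find into new detections.

```python
# Threat-intel hunt sketch: sweep log lines for known attacker tool names
# and domains. Indicators and log lines are illustrative only.

INDICATORS = {
    "tools":   {"anydesk", "screenconnect"},
    "domains": {"evil-helpdesk.example"},
}

def hunt(log_lines: list) -> list:
    """Return one hit record per (line, category, indicator) match."""
    hits = []
    for i, line in enumerate(log_lines):
        low = line.lower()
        for category, terms in INDICATORS.items():
            for term in terms:
                if term in low:
                    hits.append({"line": i, "category": category, "indicator": term})
    return hits

logs = [
    "2025-05-01 10:02 user=jsmith installed anydesk.exe",
    "2025-05-01 10:05 dns query evil-helpdesk.example",
    "2025-05-01 10:06 user=jsmith opened outlook.exe",
]
hits = hunt(logs)
```

If a hunt like this comes back empty because the relevant field was never logged, that itself is the finding: fix the logging gap before the real incident, not after.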

Key Takeaway and Actions

“Assume compromise, design for continuity and detection” is about accepting that breaches will happen and making sure your organisation can withstand and limit those incidents rather than be crippled by them. By adopting this mindset, security leaders can shift their strategy from purely preventive to truly resilient.

Here are some key takeaways and action items for security leaders to consider:

Assume Breach in Every Plan

  • For any new system, project, or security control, ask how it would fare if an attacker was already on the inside.
  • Build your architecture so that an intruder can’t freely roam and a single failure won’t take down critical operations. Operating under the assumption of inevitable breach is crucial for preparedness. This mentality should be ingrained at all levels from IT teams to the boardroom.

Prioritise Detection and Response

  • Speed matters: Invest in capabilities to detect anomalous behaviour fast, whether through automated analytics or an expert SOC team (ideally both).
  • Have a well-drilled incident response process to contain attacks immediately. Remember that early detection and swift action can save millions in damage.
  • If you currently have a strong firewall but weak monitoring, realign your priorities. Prevent what you can but detect and deal with what you can’t prevent.

Design for Continuity

Don’t wait for a crisis to figure out how to keep the lights on.

Plan and test your business continuity and disaster recovery scenarios specifically for cyber incidents.

  • Keep offline backups of key data. Know how to run critical processes manually or in alternate ways if needed (even if it’s using clipboards and paper in a pinch).
  • The goal is that even if part of your IT environment goes down, your company can still operate at a basic level and recover quickly. This also means establishing clear criteria for when to disconnect systems or pull the plug to protect the business (and empowering IT staff to make that call).

Embed Resilience into Culture and Processes

  • “Assume compromise” resilience is not just a technology challenge, but a cultural one. Encourage open communication about threats and response readiness.
  • Conduct regular training and tabletop exercises for both technical staff and business leadership, so that everyone understands their role when responding to an incident.
  • Break down silos between security, IT, and business continuity teams – they all need to work together when an attack strikes.
  • Make cyber resilience a board-level topic: discuss not just how to prevent breaches, but how quickly you can detect, respond, and recover when one occurs.

By focusing on these areas, security leaders can ensure that when (not if) the next breach happens, their organisation will detect it swiftly, contain it effectively, and continue operating with confidence. In today’s threat landscape, resilience is what separates disruption from disaster. The “assume compromise” mindset, backed by frameworks like NIST, helps turn cyberattacks into manageable incidents—not existential crises. Prepared organisations protect both reputation and business continuity.
