
Cyber threat detection and response is a well-established area of the cyber security industry, with a multitude of product and service types and definitions (and many a ‘Magic Quadrant’). Yet rather than making it easier for organisations to identify what they need, this abundance often adds to the industry noise and hype around the latest and greatest, creating a marketplace that is difficult to navigate for buyers who are unsure exactly what they need, or why they need it.

This challenge is felt most acutely by organisations that have not historically been top of the cyber criminal’s hitlist (at least, prior to the recent evolution of ransomware-driven threats, which has levelled the playing field considerably). Organisations operating in sectors without a tradition of high-profile cyber compromise typically have a less robust security posture, simply because they have not been exposed to the threat levels faced by the sectors we regard as more mature.

The absence of a prolonged, targeted threat has conditioned many organisations to see cyber security through a vulnerability-centric lens (frequently driven by their customers’ compliance requirements). Where a monitoring solution is in place, it is often a case of ‘set and forget’ using an out-of-the-box product or service. Now that any organisation is a viable target for extortion, those in industries that have historically under-invested in cyber security, and specifically in detection and response, have become the primary target.

Why do detection and response services fall short?

Naturally, with many variants and methods of threat detection, the precise strengths and weaknesses of security monitoring services vary. Some offer high visibility and telemetry but do not identify malicious actions in real time – delaying detection, but generally increasing fidelity – whilst others generate automated detections for every action, which typically increases noise and the propensity for false positives. Some prioritise detections earlier or later in the Kill Chain, with attendant trade-offs (earlier means more time to react; later means greater certainty and fidelity), or place greater emphasis on either prevention or detection (an action may be blocked without being investigated, or identified and alerted without anything being done about it).

It is important to have the capability to both detect and prevent. One without the other either means detection is inconsequential and the attack continues, or the attacker is free to try again until they inevitably succeed.

Organisations tend to rely on tools like the MITRE ATT&CK Framework to benchmark the overall detection and prevention capability of a vendor’s security monitoring service (often specifically EDR/MDR). MITRE’s catalogue of Tactics, Techniques and Procedures (TTPs) is effectively a taxonomy of the actions an attacker can perform at the different stages of an attack’s lifecycle.

Naturally, this means that many services are evaluated against this framework – either using a subset of TTPs assigned to a specific threat actor (such as the recent MITRE Engenuity assessment, which mimicked techniques associated with Wizard Spider and Sandworm) or against the Framework as a whole.

The primary limitation of typical evaluations is that they fail to represent the real-world environment that the service is likely to be deployed to, being conducted on an unrepresentative sample rig (or even just a handful of endpoints). This means that:

  • The majority of the TTPs will not be relevant to the specific environment being assessed, making a vast proportion of the detection logic redundant. Reporting ‘gaps’ in this sense can lead organisations to invest in areas that add no value; e.g. identifying your detection capability for Mac as a weakness is not relevant if you don’t have any Macs. This seems obvious, but for some the allure of MITRE Bingo is too strong to resist.
  • The sample environment fails to accurately represent the complexity and context of a network, or account for the quirks that can exist for one organisation which could be unthinkable for another. For example, having an unusual subset of users all with local admin privileges (more common than we’d like to believe).
  • A number of characteristics of the service cannot be evaluated in sufficient detail. For example, the quality of managed service elements, such as how quickly the vendor reacted to and investigated an alert, or how effectively actions could be linked together as part of an attack chain within the detection logic.
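To make the first of these points concrete, here is a minimal sketch (not official MITRE tooling; the technique sample, platform tags and coverage claims below are entirely illustrative) of how a vendor’s headline ‘coverage’ shrinks once you discount techniques that cannot apply to your estate:

```python
# Illustrative only: a tiny hypothetical slice of ATT&CK-style technique
# records, each tagged with the platforms it applies to.
TECHNIQUES = {
    "T1059.001": {"name": "PowerShell",        "platforms": {"Windows"}},
    "T1059.004": {"name": "Unix Shell",        "platforms": {"Linux", "macOS"}},
    "T1553.001": {"name": "Gatekeeper Bypass", "platforms": {"macOS"}},
    "T1021.001": {"name": "Remote Desktop",    "platforms": {"Windows"}},
}

def relevant_coverage(covered: set, estate_platforms: set) -> float:
    """Fraction of estate-relevant techniques the vendor claims to cover."""
    relevant = {tid for tid, t in TECHNIQUES.items()
                if t["platforms"] & estate_platforms}
    if not relevant:
        return 0.0
    return len(covered & relevant) / len(relevant)

# A Windows-only estate: macOS-only 'gaps' are irrelevant noise.
covered = {"T1059.001", "T1059.004", "T1553.001"}  # vendor claims 3 of 4
print(relevant_coverage(covered, {"Windows"}))      # 1 of 2 relevant → 0.5
```

In this invented example, a vendor covering three of four techniques overall covers only half of the techniques that actually matter to a Windows-only estate – the same arithmetic that makes ‘MITRE Bingo’ misleading.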

Ultimately, detection and prevention of discrete actions is only the beginning of a security vendor’s response to and containment of a security incident, which cannot be gleaned from this type of evaluation.

The result of these uniform evaluations is that most providers perform quite well. A quick Google search surfaces a string of organisations scoring 100% in the technical evaluation. Yet we know that these vendors do not all perform identically in reality.

Effectively evaluating security monitoring services

We recently ran our own assessment as part of a client engagement, the findings from which we presented at our most recent industry briefing event. Highlights from the event can be found here.

Finding the solution that is right for your organisation

Investing in a security monitoring service is an important and often sizeable purchase which is essential to get right. Many don’t get it right, and find themselves locked in to a service provider that fails to deliver the silver bullet the marketing materials and sales rep promised.

Given the magnitude of the purchase, it’s worth taking the time to properly evaluate potential suppliers without relying on a third-party assessment (such as MITRE Engenuity) alone. If you’re planning such an evaluation, we recommend you consider the following guidelines:

  • Ensure the test environment closely replicates real life. Warts and all – this includes your outdated servers and user groups with excessive privileges. If these issues are known and risk-accepted (for whatever reason) then the vendor will have to work within that reality and ensure that detection logic or manual investigations are applied to close that gap.

This is likely to be a non-standard undertaking for the vendor, and will put many outside their comfort zone. It’s important to be fair and not expect such a POC for free. That said, vendors who are reluctant to comply should be removed from consideration – it’s probably an indicator that they know they will perform worse in a setting outside of their control.

  • Use representative attack paths and ensure that attacks are followed to completion. It is important to evaluate detection chronologically, allowing for flexibility in the TTPs leveraged, to ensure the attack is representative of an adaptive human attacker. This will highlight errors in detection logic and attack chaining where vendors rely on linking generic detections rather than actually understanding the environment they are defending.
  • Assess wider metrics beyond detection and prevention. While these capability areas are less easily measured, even anecdotal insight into factors like how accessible and communicative the vendor was, the quality and accuracy of the alerts provided, and how quickly an alert was reacted to and responded to can prove invaluable in selecting the right vendor for you. We observed significant deviations in the managed aspects of the services evaluated.
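Responsiveness is one of the few managed-service qualities you can quantify during a POC. A minimal sketch, assuming you record the alert and first-response timestamps yourself during testing (the vendor names and times below are invented for illustration):

```python
from datetime import datetime
from statistics import median

# Hypothetical (alert_raised, analyst_first_response) pairs per vendor,
# captured manually while running representative attack paths.
observations = {
    "Vendor A": [("2024-03-01 10:00", "2024-03-01 10:12"),
                 ("2024-03-01 14:30", "2024-03-01 14:41")],
    "Vendor B": [("2024-03-01 10:00", "2024-03-01 11:55"),
                 ("2024-03-01 14:30", "2024-03-01 16:02")],
}

def median_response_minutes(pairs):
    """Median minutes between an alert firing and the first analyst response."""
    fmt = "%Y-%m-%d %H:%M"
    deltas = [(datetime.strptime(end, fmt) - datetime.strptime(start, fmt))
              .total_seconds() / 60 for start, end in pairs]
    return median(deltas)

for vendor, pairs in observations.items():
    print(f"{vendor}: median time-to-respond {median_response_minutes(pairs):.0f} min")
```

Even a simple comparison like this, run over a handful of staged attacks, exposes the gap between vendors far more reliably than a feature matrix.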

Organisations without internal security personnel dedicated to monitoring should avoid product-centric offerings without a proper managed component. When you spot things going wrong, you don’t want to be in a position where your security partner is absent, or insists that they can’t see evidence of a compromise (‘looks fine to me!’).

When helping our clients to validate which vendor is best suited to their needs, we often find that the providers who rank highest are not those with the shiniest product (most EDR products and interfaces are comparable) or necessarily those with ‘100% MITRE ATT&CK Framework coverage’. We regularly find significant control gaps that result in the generic MITRE-aligned controls failing when applied to a realistic testing environment – so don’t let a 100% score give you a false sense of security.

The best providers are those who are willing to listen and work with you to ensure the defences they provide are tailored to and appropriate for your organisation and network, and will actually do something about it when an alert is raised, or an issue is identified. Make sure yours won’t let you down when it matters most.


Dan Green

Head of Enablement

As Head of Enablement at JUMPSEC, Dan is responsible for shaping the solutions that JUMPSEC offer, working with our clients to ensure we deliver the outcomes they need.