The growing scale of digital networks across on-premises, cloud and hybrid environments has necessitated automation and orchestration to process the vast amounts of structured and unstructured data required for security event analysis and response. Similarly, the limited human resources available to manage complexity at increasing scale spawned the Managed Security Service Provider (MSSP) category to meet market demand for device and alert management.
As with any business, MSSP profitability relies on greater efficiency. But increasing competition has diminished what was once a 60-70 percent margin business model to 30-40 percent. As a result, the pressure for greater efficiency has forced MSSPs to center their attention on automation and orchestration of predictable tasks, a capability marketed as a value add to clients. However, overreliance on automation and orchestration, without human expertise involved at the right time and place in the chain of security event analysis, increases the risk of false positives and false negatives. And when human intervention is required in event analysis and response, the cost of goods sold naturally increases.
To maintain margins, MSSPs are removing or minimizing human intervention, but it is very challenging to strike the balance needed to properly mitigate risk and remain accountable to the context of each individual client. Humans and machines can form a symbiotic relationship: no human can analyze at the scale of a machine, and no machine can pass judgement on new and unknown situations. Machines can replicate intuition to a degree, but only a human can look at a previously unknown situation and make a clear judgement, backed by reasoning, to mitigate it.
To combat automation and orchestration apprehension, MSSPs have capitalized on the term “human threat hunting.” While not a new concept, the degree to which it is actually delivered by MSSPs calls into question their ability to detect and hunt yet-to-be-discovered cyber threats. This means those shopping for a security service provider need to ask the right questions: what one service provider means by human threat hunting can be completely different from what another means. By definition, human threat hunting should follow a proactive, analyst-driven process to search for attacker tactics, techniques, and procedures (TTPs) within an environment.
Source: SANS Institute, "A Practical Model for Conducting Cyber Threat Hunting"
In this model, attacker TTPs must be researched and understood to know what to search for in collected data. Information about attacker TTPs most often derives from signatures, indicators, and behaviors observed in threat intelligence sources. While the model is agnostic of the analytic techniques employed, such as machine learning or stateful analysis, stages within it require human judgement that cannot be supplied by machine alone: judgement exercised when the objective function for a particular set of decisions cannot be described (i.e., coded). In fact, the more autonomy a machine is given to predict outcomes in riskier decisions, the greater the variance of those outcomes. Given the balance of cost versus level of service passed on to the client, in the MSSP model human expertise has been cleverly parsed into separate add-on services, at additional cost, that reintroduce the human element to standard service models:
- Targeted Threat Hunting as a standalone or in the form of Incident Response Retainers
- Additional levels of SOC analysis
- Forensic investigation in the form of Incident Response Retainers
- Co-remediation in the form of Incident Response Retainers
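To make the search stage of the hunting model concrete, here is a minimal sketch of matching threat-intelligence indicators against collected log data. All indicators, log records, and field names below are hypothetical examples, not any vendor's actual implementation:

```python
# Toy sketch: match threat-intel indicators against collected log records.
# All indicators, records, and field names are hypothetical examples.

INDICATORS = {
    "hashes": {"d41d8cd98f00b204e9800998ecf8427e"},  # known-bad file hashes
    "domains": {"malicious.example.net"},            # known C2 domains
}

def hunt(records):
    """Return (record, reason) pairs for records matching any known indicator."""
    hits = []
    for rec in records:
        if rec.get("file_hash") in INDICATORS["hashes"]:
            hits.append((rec, "hash match"))
        elif rec.get("dst_domain") in INDICATORS["domains"]:
            hits.append((rec, "domain match"))
    return hits

logs = [
    {"host": "ws-01", "file_hash": "d41d8cd98f00b204e9800998ecf8427e"},
    {"host": "ws-02", "dst_domain": "updates.example.com"},
    {"host": "ws-03", "dst_domain": "malicious.example.net"},
]

for rec, reason in hunt(logs):
    print(rec["host"], reason)
```

Matching like this is exactly what automation does well; the analyst-driven part of the model is deciding what to search for next when nothing matches, which is where human judgement re-enters.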
Just remember: the more an MSSP is able to remove the human element, the greater its ability to sustain or increase margins. Consequently, your organization incurs not only possible additional costs in the form of add-on services, but additional risk in the form of missed threats and false positives. To determine the balance an MSSP strikes between automation and orchestration and human expertise, and the level of risk you may be taking on as a result, ask the following questions:
- How do you identify when an attacker may have adapted to hide within an automated flow that marks activity as a false positive?
- How can I see what steps you have taken to investigate an alert that one of my devices sent to you but that was not escalated?
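The first question above points at a concrete failure mode: if an automated triage flow suppresses alerts based on a static allowlist, an attacker who adapts to mimic an allowlisted attribute is silently dropped before a human ever sees the alert. A minimal sketch, using hypothetical process names and rules:

```python
# Toy sketch: an automated triage flow that auto-closes alerts matching an
# allowlist of "known benign" processes. Names and rules are hypothetical.

ALLOWLIST = {"backup_agent.exe", "av_updater.exe"}

def triage(alert):
    """Return 'closed' (auto-marked false positive) or 'escalate'."""
    if alert["process"] in ALLOWLIST:
        return "closed"  # never reviewed by a human analyst
    return "escalate"

benign = {"process": "backup_agent.exe", "outbound_bytes": 4_096}
# An attacker who renames their tool to match the allowlist slips through,
# even though the large outbound transfer is behaviorally anomalous.
adapted = {"process": "backup_agent.exe", "outbound_bytes": 50_000_000}

print(triage(benign))   # closed
print(triage(adapted))  # closed: the adapted attacker hides inside the flow
```

The fix is not simply more automation but human review of a sample of auto-closed alerts and of the suppression rules themselves, which is precisely the kind of judgement the questions above are meant to surface.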