How threat intelligence can improve vulnerability management outcomes  

Amid dynamic threats, organisations must employ a robust, risk-focused approach to vulnerability management. Chris Jacob, Global Vice President, Threat Intelligence Engineers, ThreatQuotient, shares his thoughts on why leveraging data-driven threat intelligence streamlines prioritisation, resource allocation and process automation for enhanced security.  

It might surprise you to know that more than 70 new vulnerabilities are published every day. Vulnerability management solutions help SOC teams address these and so reduce risk, but they have drawbacks: often, they only provide a snapshot of an organisation's vulnerabilities at a single point in time.

Because environments change constantly, vulnerabilities identified today may not exist tomorrow, or they may appear and disappear intermittently. This leaves security teams scrambling to understand not only what the risk is, but how it affects them and where they should start with remediation.

Vulnerability management solutions also often struggle to support SOC teams effectively, leaving them to fight an uphill battle against fragmented tools and data silos. This in turn creates major challenges around alert fatigue and overload: despite all the tools available to them, SOC teams end up undertaking manual investigations to determine the best response.

The problems are complex and wide-ranging

For those less familiar, vulnerability management is the practice of continuously discovering, classifying, prioritising and responding to software, hardware and network vulnerabilities. However, the problems with vulnerability management are complex and wide-ranging, from technology to policy and governance. With the modern enterprise evolving to become more technologically distributed and cloud-aligned, the challenge is becoming even more multifaceted.

I say this because end-to-end visibility into an organisation's technology stack is becoming harder to achieve, with shadow IT only exacerbating the issue. Limited resources mean cybersecurity maintenance tasks are never completed. Additionally, the scope and impact of software supply chain risk is only just starting to be properly understood by those outside the software development industry.

Unfortunately, those who are responsible for patching and fixing software vulnerabilities are rarely involved in the technology selection process, so lessons learned never feed back into technology selection choices. Layer onto this the escalating compliance landscape, and it is easy to see how overwhelming the task is. As a result, it is simply impossible to patch and mitigate every software vulnerability present in an enterprise network.

Historically, organisations would prioritise mitigation based on limited and inward-facing data, such as server versus workstation, an employee's role, asset criticality, vulnerability score and patch availability. But despite this level of prioritisation, patching remains a time-consuming task. The approach also has limited effectiveness because it doesn't consider how a vulnerability is actively being exploited in the wild, or the risk that the adversaries leveraging it pose to a company's specific environment.

Not all assets are created equal  

Most companies focus more on the consequences and severity of a vulnerability than on the likelihood that they will be impacted. Of course both are important, but if you focus too much on severity and consequence, you may not see the complete picture. CVSS scores, for example, focus mainly on severity and assume global likelihood values that are valid for all organisations; this is a mistaken assumption. Yes, a vulnerability may be critical and of the highest severity, but it will be more or less relevant to your own organisation depending on the threats that target it. This is where custom likelihood comes in. Understanding your own likelihood is critical for prioritisation and triage.
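
As a simple illustration, the sketch below (in Python, using purely hypothetical figures) shows how the same 'critical' CVSS score can translate into very different priorities once an organisation-specific likelihood is factored in.

```python
# Minimal sketch: the same CVSS severity produces different priorities
# once an organisation-specific likelihood of exploitation is applied.
# All figures here are hypothetical.

def risk_score(cvss_severity: float, likelihood: float) -> float:
    """Combine severity (0-10) with a custom likelihood (0-1)."""
    return cvss_severity * likelihood

cvss = 9.8  # a 'critical' vulnerability by global CVSS standards

# Organisation A: the exploiting threat actors actively target its sector.
# Organisation B: no known adversary activity against its environment.
print(round(risk_score(cvss, likelihood=0.9), 2))  # 8.82 -> urgent for A
print(round(risk_score(cvss, likelihood=0.1), 2))  # 0.98 -> low priority for B
```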

The modern enterprise has a wealth of internal and external data with which to make more informed choices about the actions to take and the threats to respond to. While exposure is an important input into the risk equation, it only really becomes relevant once certain milestones in the vulnerability lifecycle are reached.

For example: what does it cost adversaries to develop exploitation tools for the vulnerability, or is an exploit already available within off-the-shelf attack tool sets? This is one of the largest influences on the likelihood of it being used against the masses. Does exploitation of the vulnerability fit into the threat actor's tactics, techniques and procedures (TTPs) sweet spot, making it easy for them to execute on their objective?
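
To make this concrete, here is a minimal sketch of how questions like these could feed a custom likelihood score. The factor names and weights are illustrative assumptions, not a standard scoring model.

```python
# Illustrative sketch: derive a custom likelihood (0-1) from the kinds of
# questions above. Factor names and weights are assumptions for example
# purposes only, not a standard model.

FACTOR_WEIGHTS = {
    "public_exploit_available": 0.4,    # low cost for adversaries to weaponise
    "in_offtheshelf_toolkits": 0.35,    # packaged into commodity attack tools
    "fits_actor_ttp_sweet_spot": 0.25,  # easy for known actors to execute on
}

def custom_likelihood(factors: dict[str, bool]) -> float:
    """Sum the weights of the factors that are true for this vulnerability."""
    return sum(w for name, w in FACTOR_WEIGHTS.items() if factors.get(name))

print(custom_likelihood({
    "public_exploit_available": True,
    "in_offtheshelf_toolkits": True,
    "fits_actor_ttp_sweet_spot": False,
}))  # 0.75 -> high likelihood, worth getting ahead of
```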

Making data-driven decisions  

These are elements that the enterprise has absolutely no control over, but it can gain visibility into them and get ahead of the response process if the answer to any of these questions is 'yes'. They can also serve as critical inputs into a decision to stop current mitigation efforts and pivot to other issues that are more likely to impact the organisation. This is where using threat intelligence in conjunction with established vulnerability management practices can help organisations identify, prioritise and remediate the vulnerabilities that carry a higher risk profile or have the potential for greater impact on the organisation.

To aid practitioners in vulnerability triage, it is desirable to have vulnerability identifiers presented in a prioritised list for mitigation. With the likelihood of exploitation being a key multiplier within the risk equation, it's critical to have accurate, up-to-date and verifiable information that can help the organisation understand the details of that likelihood.
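
Putting severity and likelihood together, a prioritised triage list might be produced along the lines of the sketch below; the CVE identifiers and scores are placeholders.

```python
# Sketch of a prioritised triage list: risk = likelihood x impact,
# sorted highest first. CVE identifiers and values are placeholders.

vulns = [
    {"cve": "CVE-2024-0001", "impact": 9.8, "likelihood": 0.10},
    {"cve": "CVE-2024-0002", "impact": 7.5, "likelihood": 0.90},
    {"cve": "CVE-2024-0003", "impact": 8.8, "likelihood": 0.50},
]

for v in sorted(vulns, key=lambda v: v["impact"] * v["likelihood"], reverse=True):
    print(f'{v["cve"]}: risk={v["impact"] * v["likelihood"]:.2f}')
# CVE-2024-0002: risk=6.75  <- high severity AND actively exploitable
# CVE-2024-0003: risk=4.40
# CVE-2024-0001: risk=0.98  <- 'critical' on paper, unlikely in practice
```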

By combining information from a variety of public and private, internal and external sources, prioritisation lists can be made more accurate. Automated assessment and scoring of a vulnerability's likelihood across these many disparate data sources only becomes possible by consolidating the data into a single record of truth about what is known about the vulnerability.
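
The sketch below illustrates the consolidation idea: context about the same CVE from several hypothetical feeds is merged into one record of truth before any scoring takes place.

```python
# Sketch: consolidate per-CVE context from multiple (hypothetical) feeds
# into a single record of truth, so scoring runs against one merged view.

from collections import defaultdict

feed_records = [
    {"cve": "CVE-2024-0002", "source": "vendor_advisory", "exploited": False},
    {"cve": "CVE-2024-0002", "source": "commercial_intel", "exploited": True},
    {"cve": "CVE-2024-0002", "source": "internal_scanner", "asset_count": 42},
]

merged: dict[str, dict] = defaultdict(dict)
for rec in feed_records:
    cve = rec["cve"]
    # Union of all context; a truthy 'exploited' from any source wins.
    merged[cve]["exploited"] = merged[cve].get("exploited") or rec.get("exploited", False)
    merged[cve].setdefault("sources", []).append(rec["source"])
    if "asset_count" in rec:
        merged[cve]["asset_count"] = rec["asset_count"]

print(dict(merged))
# {'CVE-2024-0002': {'exploited': True,
#   'sources': ['vendor_advisory', 'commercial_intel', 'internal_scanner'],
#   'asset_count': 42}}
```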

The element of context that is most often missed is the likelihood of exploitation. Once a vulnerability hits a threshold value, or key elements of context are identified, automated actions can be initiated.
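
A minimal sketch of that trigger logic, with a hypothetical threshold and placeholder actions, might look like this:

```python
# Sketch: kick off an automated response once a vulnerability crosses a
# risk threshold or a key piece of context appears. The threshold, context
# key and notify/ticket actions are hypothetical placeholders.

RISK_THRESHOLD = 6.0

def triage(cve: str, risk: float, context: dict) -> None:
    if risk >= RISK_THRESHOLD or context.get("exploited_in_wild"):
        # In a real pipeline this would call a SOAR playbook or
        # ticketing API; here we just record the decision.
        print(f"{cve}: open emergency patch ticket (risk={risk})")
    else:
        print(f"{cve}: add to routine patch cycle (risk={risk})")

triage("CVE-2024-0002", risk=6.75, context={"exploited_in_wild": True})
triage("CVE-2024-0001", risk=0.98, context={})
```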

Moving away from legacy patching approaches 

This helps move vulnerability management from its legacy ('must patch everything') approach to a business-focused, risk-based approach, and allows security teams to become part of the business decision-making process. Using threat intelligence from internal and external sources, vulnerability management teams can identify, validate and orchestrate the entire vulnerability management lifecycle for the key assets at risk, and quickly address and remediate issues through automation and collaboration.

In today's ever-evolving threat landscape, organisations need to adopt a structured and efficient risk-based process for managing vulnerabilities to reduce the risk of security breaches and improve overall security posture. A data-driven threat intelligence approach is essential in this regard, as it enables organisations to identify and prioritise vulnerabilities accurately, allocate resources effectively and automate processes with high-fidelity data.