Peter Drucker once famously said, “What gets measured gets managed.” Metrics are essential to running any part of a security program. That is especially true for vulnerability management, which heavily influences an organization’s risk and loss exposure.
Measurement can’t and shouldn’t focus only on how many vulnerabilities are being identified. A mountain of unaddressed issues does little to improve risk posture. This article breaks down four key metrics that security leaders should be tracking within their vulnerability management programs.
Mean Time to Remediate
Mean time to remediate (MTTR) matters because a vulnerability management program succeeds when an organization can quickly respond to important issues with a fix. Measure it in aggregate across all vulnerabilities, or with more specificity. For example:
- MTTR for applications of different risk profiles
- MTTR for different buckets of vulnerability risk ratings (critical, high, medium, low)
- MTTR for vulnerabilities with verified threat intelligence versus those without it
Improving MTTR means driving the number down as low as is feasible.
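As a rough sketch of what this measurement can look like, the snippet below computes aggregate MTTR and the per-severity cut suggested above. The record layout, severity labels, and sample dates are all illustrative, not taken from any particular scanner’s export format:

```python
from datetime import date
from statistics import mean

# Hypothetical vulnerability records: (severity, opened, closed).
# In practice these would come from your ticketing or scanning tooling.
vulns = [
    ("critical", date(2024, 1, 2), date(2024, 1, 5)),
    ("critical", date(2024, 1, 10), date(2024, 1, 17)),
    ("high", date(2024, 1, 3), date(2024, 1, 30)),
    ("low", date(2024, 1, 1), date(2024, 3, 1)),
]

def mttr_days(records):
    """Mean time to remediate, in days, across closed findings."""
    return mean((closed - opened).days for _, opened, closed in records)

def mttr_by_severity(records):
    """MTTR bucketed by risk rating, one of the cuts suggested above."""
    buckets = {}
    for record in records:
        buckets.setdefault(record[0], []).append(record)
    return {sev: mttr_days(recs) for sev, recs in buckets.items()}

print(mttr_days(vulns))         # aggregate MTTR in days
print(mttr_by_severity(vulns))  # per-severity MTTR in days
```

The same bucketing function works for any of the cuts above: swap the severity field for an application risk profile or a threat-intelligence flag.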
Number of Vulnerabilities Identified by Class
Clint Gibler has given a slew of fantastic talks on systematically remediating classes of vulnerabilities through intentional security linting and secure defaults. Identifying clusters of vulnerability types, such as SQL injection or a particular configuration issue, is helpful. It’s especially helpful when you can overlay that with similar technology. When a security team finds consistently recurring vulnerability clusters within similar technology stacks, that is a key opportunity.
Security teams can position themselves and their organizations for scaling opportunities by leveraging some of the principles outlined by Clint:
- Identify safe configuration options to address the vulnerability class at hand (developer frameworks, libraries, infrastructure, etc)
- Write and deploy security linting rules that check for continued adherence to the safe configuration option on every build
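To make the second point concrete, here is a toy stand-in for a security linting rule; real tools such as Semgrep or Bandit do this far more robustly. The rule flags string formatting used to build SQL queries, a common source of SQL injection, and would run against every build:

```python
import re

# Toy rule: flag SQL built via string formatting ("%"), which invites
# SQL injection, while allowing parameterized queries. The regex and
# sample code are illustrative only.
UNSAFE_SQL = re.compile(r'execute\(\s*["\'].*%s.*["\']\s*%')

def lint(source: str) -> list:
    """Return 1-based line numbers that violate the rule."""
    return [i for i, line in enumerate(source.splitlines(), 1)
            if UNSAFE_SQL.search(line)]

code = '''
cur.execute("SELECT * FROM users WHERE id = %s" % user_id)
cur.execute("SELECT * FROM users WHERE id = %s", (user_id,))
'''
print(lint(code))  # flags only the string-formatted query on line 2
```

Wired into CI, a check like this fails the build whenever code drifts away from the safe default, which is what turns a one-time fix into a remediated vulnerability class.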
Vulnerabilities Identified Scan Over Scan by Source
Shift left. That phrase has been the cornerstone of marketing hype and of many DevSecOps strategies across the industry. It’s all about moving security activities, including but not limited to scanning, further left in the development lifecycle. As an application security program matures, it should ideally see fewer vulnerabilities identified at later stages of the application lifecycle. Tools (or processes that generate vulnerabilities, such as a penetration test) can be mapped to particular phases of an application lifecycle. For example, penetration testing occurs in the latter stages, pre- or post-delivery to production. By contrast, software composition analysis (SCA) occurs further upstream, during active development.
Vulnerabilities that are consistently found later and later may signal a missed opportunity for upstream identification or prevention work. Engage in regular retrospectives or periodic postmortem reviews to seek out opportunities to shift left.
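One simple way to watch this trend, sketched below with made-up numbers and an assumed mapping of sources to lifecycle stages (a mapping you would define for your own toolchain), is to track what share of findings surface at late stages scan over scan:

```python
# Illustrative scan-over-scan finding counts, keyed by source.
scans = [
    {"sca": 40, "sast": 25, "dast": 12, "pentest": 9},  # quarter 1
    {"sca": 44, "sast": 22, "dast": 8, "pentest": 5},   # quarter 2
    {"sca": 47, "sast": 20, "dast": 5, "pentest": 2},   # quarter 3
]

# Assumed stage mapping: DAST and penetration testing sit late in the
# lifecycle; SCA and SAST sit upstream during active development.
LATE_STAGE = {"dast", "pentest"}

def late_stage_share(scan: dict) -> float:
    """Fraction of a scan's findings surfacing late in the lifecycle."""
    late = sum(v for k, v in scan.items() if k in LATE_STAGE)
    return late / sum(scan.values())

shares = [round(late_stage_share(s), 2) for s in scans]
print(shares)  # [0.24, 0.16, 0.09]: a falling share suggests shifting left
```

A ratio works better than raw counts here, since total finding volume also moves as scanning coverage grows.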
Coverage
It is very common for organizations to manage a portfolio of applications or services, whether small or numbering in the thousands. At any size over one, it’s important to ensure coverage across the portfolio. If an organization purchases the world’s best software composition analysis solution but only configures it to run on 1% of its applications, the tool is ineffective. Think about coverage at the functional capability level for each application:
- Cloud or infrastructure configuration scanning
- Network and endpoint scanning
- Software composition analysis
- Static code analysis
- Dynamic application security testing
- Manual penetration testing
- WAF deployment
Not every application needs the metaphorical kitchen sink. Approach this problem from a risk-based perspective and align the density of activities with the level of risk associated with an application.
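Coverage itself is a simple ratio per capability. A minimal sketch, with a hypothetical portfolio and capability names chosen purely for illustration:

```python
# Hypothetical portfolio: which security capabilities are enabled
# for each application. Names and data are illustrative only.
portfolio = {
    "payments-api": {"sca", "sast", "dast", "pentest"},
    "internal-wiki": {"sca"},
    "mobile-app": {"sca", "sast"},
    "legacy-batch": set(),
}

def coverage(portfolio: dict, capability: str) -> float:
    """Share of applications with a given capability enabled."""
    covered = sum(capability in caps for caps in portfolio.values())
    return covered / len(portfolio)

for cap in ("sca", "sast", "dast"):
    print(cap, coverage(portfolio, cap))  # e.g. sca 0.75
```

In a risk-based approach, the denominator can be narrowed to only the applications where a capability is warranted, so a low-risk internal tool without penetration testing doesn’t drag the number down.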
Concluding Thoughts
These metrics are intended as a starting point for measuring growth in vulnerability management. It’s important that measurements can be collected consistently and reliably. Consistent data empowers sounder decision-making for security leaders, and it builds trust with stakeholders: understanding what security does all day with all of these tools, and why that matters, is both compelling and necessary. Take these metrics, measure them, report on them for 6-12 months, and see whether your risk posture improves. If they don’t work for you, my hope is that the process spurs ideas for your own measurements that are more contextually relevant to your organization.