Security has a lot of tools. We have tools to scan networks, code, open-source libraries, databases, cloud configuration, endpoints, infrastructure as code, and more. For security teams, vulnerability reports are among our key modes of communication: we identify issues and let others know about them with enough context that they can be fixed.
However, there are several problems with this. In this article, I’ll break down a few of them.
If a security team isn’t able to effectively communicate about vulnerabilities, it’s going to be very difficult to manage them. Every team has priorities, and by design those priorities almost certainly include work that is not security related.
Note: For the purposes of this article, I use the terms “reports” and “dashboards” somewhat interchangeably. Both refer to the output of a scan, whether it’s pushed to recipients or pulled by them.
State Scan Over Scan
Scanning tools aren’t always consistent when it comes to tracking the state of an asset, scan over scan. Does the tool treat each scan as fresh? Does it track a specific instance of a vulnerability across multiple scans? What happens when the asset changes in some way between scans? Should that be considered a new asset, or the same asset carrying the same vulnerabilities? How might the tool handle ephemeral infrastructure?
The point of these questions is not to argue for some “right” answer in these scenarios. Rather, it’s to highlight that different tools in your stack are likely to handle these state-management complexities differently, leading to general inconsistency.
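As a rough illustration of why these choices matter, here’s a minimal sketch (in Python, with invented field names like `asset_id` and `vuln_id`) of one way a tool might fingerprint findings so they can be compared scan over scan:

```python
import hashlib

def finding_fingerprint(finding: dict) -> str:
    """One possible identity for a finding across scans.

    Everything here is an assumption: a real scanner might key on an
    IP address, hostname, image digest, file path, or something else,
    and each choice changes what counts as "the same" finding.
    """
    parts = [
        finding["asset_id"],           # e.g. image digest vs. hostname vs. IP
        finding["vuln_id"],            # e.g. CVE identifier or plugin ID
        finding.get("location", ""),   # e.g. package name or file path
    ]
    return hashlib.sha256("|".join(parts).encode()).hexdigest()

def diff_scans(previous: list[dict], current: list[dict]) -> dict:
    """Classify findings as new, persisting, or resolved between two scans."""
    prev_ids = {finding_fingerprint(f) for f in previous}
    curr_ids = {finding_fingerprint(f) for f in current}
    return {
        "new": curr_ids - prev_ids,
        "persisting": curr_ids & prev_ids,
        "resolved": prev_ids - curr_ids,
    }
```

Change what goes into `asset_id`, say a hostname versus an image digest, and the same CVE on a redeployed container flips from “persisting” to “resolved plus new.” Each tool in your stack makes that choice on its own, which is where the inconsistency comes from.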
Risk Rating Inconsistency
The security industry, as a whole, has a wildly inconsistent way of talking about risk. That inconsistency is amplified by industry tools and wrapped in flashy dashboards. Some tools describe risk on a numeric scale (0-100). Others use low, medium, and high rankings. Still others incorporate CVSS scores in an attempt to quantify the risk of an issue.
Even if two tools use the same scale, they may calculate the underlying classification differently. This puts a tremendous amount of pressure on recipients to interpret all of this data correctly amidst their many other priorities.
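To make that interpretation burden concrete, here’s a hypothetical sketch of the normalization layer teams end up writing to put three tools on one scale. The tool names and bucket boundaries are invented; choosing them is exactly the judgment call that otherwise gets pushed onto recipients:

```python
def normalize_severity(tool: str, value) -> str:
    """Map each tool's native rating onto a shared low/medium/high/critical scale.

    The cutoffs below are invented for illustration, not taken from any
    real product's documentation.
    """
    if tool == "scanner_a":   # hypothetical tool reporting a 0-100 score
        score = float(value)
        if score >= 90: return "critical"
        if score >= 70: return "high"
        if score >= 40: return "medium"
        return "low"
    if tool == "scanner_b":   # hypothetical tool reporting labels
        return {"low": "low", "medium": "medium", "high": "high"}[value.lower()]
    if tool == "scanner_c":   # hypothetical tool reporting CVSS v3 base scores
        cvss = float(value)
        if cvss >= 9.0: return "critical"
        if cvss >= 7.0: return "high"
        if cvss >= 4.0: return "medium"
        return "low"
    raise ValueError(f"unknown tool: {tool}")
```

Two teams writing this same shim will almost certainly pick different cutoffs, so even the “normalized” view isn’t comparable from one organization to the next.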
I’ll be getting into more detail on this particular topic in a future article.
Pure Volume
As a field, we’re pushing for faster, more continuous delivery of software. Vulnerability scanning will inevitably need to keep up. More scanning means more frequent notifications of results, which means more noise.
The more our field wants to do, the more data we will generate. Without proper tuning, which not every tool makes easy, we risk creating a wall of white noise that becomes easy to ignore over time.
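As a sketch of what tuning can look like in practice (the severity threshold and suppression list below are invented examples, not any particular tool’s feature), even a small filtering layer determines how much of each scan a team actually sees:

```python
from datetime import datetime

# Hypothetical tuning policy: which severities get reported, and which
# known findings are temporarily accepted (e.g. pending an upgrade).
REPORTABLE = {"critical", "high"}
SUPPRESSIONS = {
    # fingerprint -> expiry date of the risk acceptance
    "example-fingerprint": datetime(2025, 1, 31),
}

def worth_reporting(finding: dict, now: datetime) -> bool:
    """Drop low-severity noise and anything under an unexpired suppression."""
    if finding["normalized_severity"] not in REPORTABLE:
        return False
    expiry = SUPPRESSIONS.get(finding["fingerprint"])
    if expiry and now < expiry:
        return False
    return True

def filter_scan(findings: list[dict], now: datetime) -> list[dict]:
    return [f for f in findings if worth_reporting(f, now)]
```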
Learning and Switching Costs
The cognitive load on the recipients of these reports or dashboards increases with each additional tool. Every tool has its own UX, its own login flow, and its own way of interpreting vulnerabilities.
It takes time and energy to learn a new tool, even if you’re just a consumer of its output. It takes more time and energy to reason about the results from one tool relative to another. Harder still is knowing how important a given result is compared to the team’s other opportunities, such as building a critical new feature or paying down tech debt.
More Is Not Necessarily Better
This article only scratches the surface of the many problems that tool overload creates. As the technology ecosystem and the systems we build grow more complex, this problem is likely to grow along with them.
More attack surface, more technology types, more tools to manage it all. The answer can’t be to keep throwing more and more resources at the problem.