Vulnerability management: Designing severity risk ranking systems

Synopsys Editorial Team

Oct 19, 2016 / 3 min read

One of the first challenges most security teams tackle is defect discovery. Soon afterwards, the bugs start piling up. I often work with organizations struggling to consistently risk rank issues into severity categories. There are many factors to consider in this process, not to mention the amount of brain power going into devising the perfect severity system.

Even the most popular industry-accepted system can be a square peg in a round hole if it isn't a good fit for your organization. For instance, the Common Vulnerability Scoring System (CVSS) tends to be overly complex for most organizations to implement. While it's very useful for infrastructure issues, it struggles to capture the contextual complexity of application vulnerabilities.

Other systems are unintuitive. Take the PCI DSS severity levels, for instance. This categorization system is plagued with unintuitive terminology. For example, it's not obvious that "urgent" is more severe than "critical" when it comes to vulnerabilities. Additionally, program owners typically work to consolidate vulnerability data from multiple sources that don't use the same criteria or the same scale.



Rather than dwelling on the ways individual scoring systems fall short, I want to emphasize that the underlying goal of a severity system is to prioritize remediation efforts. As such, my advice to customers is typically to use any approach they prefer, as long as it accomplishes these five strategic objectives:

Objective 1: Distribute issues evenly across categories

Systems that push a majority of issues into a single category are generally less effective. They are also often skewed by someone attempting to hide risk. For example, a healthy system might place approximately 25% of all issues in each of the following categories:

  • Critical
  • High
  • Medium
  • Low

It's a good idea to also have an exception category for vulnerabilities that need immediate attention. For instance, a vulnerability that is being actively exploited as part of an ongoing security incident. I recommend an "Emergency" designation for such issues.
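As a minimal sketch of this objective (the category names come from the list above, but the data, function name, and tolerance threshold are illustrative assumptions), a quick audit can flag when a severity system is dumping too many issues into one bucket:

```python
from collections import Counter

# Hypothetical audit: flag categories holding far more (or fewer)
# issues than an even share across the four categories.
CATEGORIES = ["Critical", "High", "Medium", "Low"]

def distribution_report(issues, tolerance=0.2):
    """Return each category's share of all issues and whether that
    share strays more than `tolerance` from an even split (0.25)."""
    counts = Counter(issue["severity"] for issue in issues)
    total = sum(counts.values()) or 1
    even_share = 1 / len(CATEGORIES)
    report = {}
    for cat in CATEGORIES:
        share = counts.get(cat, 0) / total
        report[cat] = (round(share, 2), abs(share - even_share) > tolerance)
    return report

issues = (
    [{"severity": "Critical"}] * 2
    + [{"severity": "High"}] * 3
    + [{"severity": "Medium"}] * 3
    + [{"severity": "Low"}] * 12
)
print(distribution_report(issues))
# "Low" holds 60% of all issues here, so it is flagged as skewed.
```

A report like this won't tell you *why* the distribution is lopsided, but it surfaces the symptom early, whether the cause is a poorly calibrated system or someone hiding risk.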

Objective 2: Assign a numerical score beneath the severity categories

Assigning granular numbers lets you compare any two issues more precisely than severity category names alone. This approach also sharpens an organization's risk lens, using raw data to drive the most effective remediation order.
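To sketch what this might look like (the score bands are illustrative assumptions, not prescribed by the article), a numeric score can sit beneath each category label so that two issues in the same category still rank against each other:

```python
# Illustrative score bands; an organization would tune these
# to its own risk lens. Includes the "Emergency" exception category.
BANDS = {
    "Emergency": (9.0, 10.0),
    "Critical": (7.0, 8.9),
    "High": (4.0, 6.9),
    "Medium": (2.0, 3.9),
    "Low": (0.1, 1.9),
}

def category_for(score):
    """Map a granular score back to its severity category."""
    for name, (low, high) in BANDS.items():
        if low <= score <= high:
            return name
    raise ValueError(f"score {score} falls outside all bands")

# Two issues share a category, but the raw scores still rank them.
a, b = 7.2, 8.8
assert category_for(a) == category_for(b) == "Critical"
assert b > a  # remediate the 8.8 issue first
```

The category names stay as the communication layer for stakeholders, while the underlying numbers drive the actual remediation queue.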

Objective 3: Consider the context of the vulnerability among your known threats and attack vectors

Many moons ago, I looked at a risk system in place at a Fortune 500 firm. It was substantially weighted by whether or not systems directly supported a critical business application. Additionally, the severity of issues on systems that weren't in use was lowered to zero (meaning they were never fixed, or even considered). The problem here is that those systems shared the same environments. An attacker could easily compromise the unmanaged older systems and gain access to the databases for critical applications. The risk lens in place didn't account for the assets, controls, and threats actually present.

Objective 4: Make the system only as complicated as necessary to work for you

The system should only be complex enough to get the job done. Start simple, and evolve it as necessary to support remediation needs. Adding granularity can help focus remediation efforts. However, forcing everyone to guess at the potential losses associated with missing patches is a recipe for disaster.

Objective 5: Treat gaps in defect visibility the same way you treat critical risks

I've seen system administrators add firewall blocks to prevent vulnerability scanning from discovering and reporting issues. That may be an extreme example of what not to do. Nonetheless, when defect discovery coverage has gaps due to access or logistical issues, it doesn't mean that no risk exists. It means that the risk is unknown.

It's best to synthesize an issue into your defect stack that says, "This device is out of security spec." That allows such issues to float to the top of the pile.
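One way to implement this (the asset and field names here are hypothetical) is to inject a synthetic, high-severity issue for every asset the scanner never reached:

```python
def synthesize_gap_issues(assets, scanned_ids):
    """For any asset missing from scan coverage, emit a placeholder
    issue so the unknown risk surfaces at the top of the pile."""
    gaps = []
    for asset in assets:
        if asset["id"] not in scanned_ids:
            gaps.append({
                "asset": asset["id"],
                "severity": "Critical",
                "title": "Device is out of security spec (no scan coverage)",
            })
    return gaps

# Hypothetical inventory: legacy-09 was never scanned, so it gets
# a Critical placeholder issue alongside the real findings.
assets = [{"id": "db-01"}, {"id": "web-02"}, {"id": "legacy-09"}]
issues = synthesize_gap_issues(assets, scanned_ids={"db-01", "web-02"})
```

Feeding these placeholders into the same defect stack as real findings means coverage gaps compete for remediation attention instead of silently disappearing.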

Summing it up

Aiming for these five strategic objectives delivers the most important element of success for any security defect system: it supports the strategy of resolving the highest-priority vulnerabilities first. That's how you score a security touchdown.

