When it comes to classic processes like identifying, prioritizing, and tracking scanner-based vulnerabilities, I prefer to dive into strategic areas like performance targets and fix service levels. Who still worries about finding vulns and deciding which ones should be fixed? Turns out more people than I thought, especially mid-sized PCI shops in the early stages of an infosec program.
I mentioned this to a friend, Ed Marlow, who's (currently) an independent consultant. Most enterprises worked out the basics long ago. However, where do you go for reference material if you have to build a vulnerability management (VM) process from scratch? For fun, Ed and I started sketching out the process and noting public examples. My first stop was ye ol' NIST site. Their Creating a Patch and Vulnerability Management Program guide is on par with other NIST work and covers much of the "what" that needs building versus the "how." I always learn something from NIST, even if it's just relief that I don't have to do everything they say!

Of course there's a ton of blog and article content on VM, e.g., a pretty good ISACA article popped up. Instead of rehashing that content, Ed and I looked to see if certain topics needed more attention. We found a few underserved areas: focusing on risk vs. just vulnerability, prioritization examples, and mitigation performance levels.
Here's a slide I use to place these in the larger process:
It's All In The Risk
While some reference materials do a pretty good job, I prefer to jump right to the risk conversation. The only person who cares about a raw CVSS score is the PCI auditor. Everyone else needs to prioritize their work and focus on risk - what is the likelihood and severity of an agent exploiting that vuln? Plus, we don't have time for an exhaustive analysis. Our workflow needs to scale. We need to prioritize 20-30 (or more) vulns in a one-hour meeting with security, application, and sys admin folks who'd rather be doing something else. It's also important to have an exception path in the process to dig deeper into the risk equation for contentious or costly vulns. Otherwise, this process needs to put vulns in buckets efficiently.
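The bucket approach above can be sketched in a few lines. This is a minimal, illustrative example, not anything from the NIST guide: the impact/likelihood labels and P1-P4 buckets are assumptions you'd tune with your own stakeholders.

```python
# Bucket-style triage sketch: map an (impact, likelihood) pair to a
# priority bucket so the meeting debates buckets, not raw CVSS scores.
# All labels and the matrix itself are illustrative placeholders.

def priority_bucket(impact: str, likelihood: str) -> str:
    """Return a coarse priority bucket for a vuln."""
    matrix = {
        ("high", "high"): "P1",
        ("high", "medium"): "P2",
        ("high", "low"): "P3",
        ("medium", "high"): "P2",
        ("medium", "medium"): "P3",
        ("medium", "low"): "P4",
        ("low", "high"): "P3",
        ("low", "medium"): "P4",
        ("low", "low"): "P4",
    }
    return matrix[(impact, likelihood)]
```

The point of a fixed matrix is speed: most vulns fall straight into a bucket, and only the contentious ones take the exception path for deeper analysis.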
Step 1: Impact
The best way to scale prioritization is through defined tables. The key to success is to identify which systems are affected by the vulnerability and the class of data they support. Here's where I say to Ed that anyone building a VM program already has a data classification policy. Ed just looks at me...
Since we're focusing on VM, I won't go into building a data classification policy; here are some quick references: ISACA, IIA, even Infosecisland. Each data class should include examples of regulated data, recovery time objectives, and descriptions of the financial, brand, and strategic impacts of a loss of confidentiality or integrity. If you haven't applied data classes to the asset groups defined in your vulnerability scanner, add the project to your risk register to streamline the VM process.
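Tagging scanner asset groups with a data class can be as simple as a lookup table, so impact is pre-computed before anyone walks into the triage meeting. A sketch, with entirely hypothetical group names and classes:

```python
# Map scanner asset groups to data classes so a vuln's impact side is
# answered automatically. Group names and classes are made up for
# illustration; use your scanner's groups and your own policy's classes.

ASSET_GROUP_DATA_CLASS = {
    "cardholder-dmz": "restricted",    # PCI scope
    "hr-systems": "confidential",
    "marketing-web": "internal",
    "test-lab": "public",
}

def impact_for_asset(group: str) -> str:
    # Unknown groups default to the highest class until someone triages them
    return ASSET_GROUP_DATA_CLASS.get(group, "restricted")
```

Defaulting unknown groups to the highest class is a deliberately conservative choice: it makes gaps in your asset inventory noisy instead of invisible.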
Step 2: Will It Happen To Us?
Given all the data from the vulnerable vendor's advisory, the scanning vendor, and CVSS, the general questions are answered. The important part is to determine the priority in your environment. Does the nature of your network, system builds, or compensating controls raise or lower the chance of successful exploitation? Plus, will damage be amplified (e.g., propagated) or minimized/contained? Here's a simple example to help the group raise or lower priority.
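One way to make the "raise or lower" conversation mechanical is a handful of environmental modifiers applied to the base severity. A sketch under assumptions: the modifier names and the 1-5 scale are mine, not a standard.

```python
# Adjust a base 1-5 likelihood score for your environment. The three
# modifiers here are illustrative examples of what raises or lowers
# the chance of successful exploitation in a given shop.

def adjust_likelihood(base: int, internet_facing: bool,
                      exploit_public: bool,
                      compensating_control: bool) -> int:
    score = base
    if internet_facing:
        score += 1    # wider attacker exposure
    if exploit_public:
        score += 1    # working exploit code lowers attacker cost
    if compensating_control:
        score -= 1    # e.g., an IPS signature or host isolation
    return max(1, min(5, score))  # clamp to the 1-5 scale
```

The clamp keeps the group's adjustments inside the agreed scale, so a pile of modifiers can't push a vuln off the chart in either direction.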
Step 3: Bucket Time
Selecting a mitigation time frame by asset class and vuln priority is straightforward once you agree on the service levels with stakeholders. For example:
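Once negotiated, the service levels are just a two-key lookup: asset class crossed with vuln priority gives a days-to-mitigate window. The numbers below are placeholders, not a recommendation; they're the output of the stakeholder negotiation, not an input to it.

```python
# Negotiated service levels: days to mitigate by (asset class, priority).
# Every value here is a placeholder to be agreed with asset and control
# owners, not a suggested standard.

MITIGATION_SLA_DAYS = {
    ("restricted", "P1"): 7,
    ("restricted", "P2"): 30,
    ("internal",   "P1"): 30,
    ("internal",   "P2"): 90,
}

def due_in_days(asset_class: str, priority: str) -> int:
    # Fall back to the longest window if the pair isn't explicitly defined
    return MITIGATION_SLA_DAYS.get((asset_class, priority), 90)
```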
Defining mitigation date ranges is a negotiation between stakeholders and represents an acceptable level of risk. Obviously the asset owners are accountable for prioritizing assets. It's the control owners' (app/db/sys admins') job to commit to a service level given their costs and constraints. It's security's job to prioritize the vuln and measure the overall performance of the process. If you're new to RACIs, they can save a lot of consternation. (Recall, there can be only one Accountable.)
How's IT Going (clever?)
I've written before about why I think the best measurement for VM is the % of vulns mitigated per service level. It's important to emphasize because patching and configuration management are such core IT services. If your company struggles in this area, it's best to make it visible and ensure management understands the risks. Aging your vulns and comparing against a service level can be tricky depending on your tools and volume. However, the visual is definitely worth it (snip of our tool below).
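The metric itself reduces to a small calculation: for each mitigated vuln, compare its age at close against the service level that applied, then report the on-time percentage. A minimal sketch, with hypothetical inputs:

```python
# Percent of vulns mitigated within their service level. Input is a
# list of (days_to_mitigate, sla_days) pairs; real tooling would pull
# these from scan history and the negotiated SLA table.

def pct_within_sla(vulns):
    """Return the % of vulns closed within their SLA window."""
    vulns = list(vulns)
    if not vulns:
        return 0.0
    on_time = sum(1 for days, sla in vulns if days <= sla)
    return 100.0 * on_time / len(vulns)

# Example: two of three vulns closed on time
pct_within_sla([(5, 7), (40, 30), (20, 30)])
```

Trend this per asset class or per team and the conversation shifts from individual vulns to whether the negotiated service levels are actually being met.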
An added bonus of measuring VM performance is finding out whether management wants more measurement and more conversations about what's acceptable.