
We’re all Medium-High

June 4, 2010

 

Funny title, and it even applies to risk management. 🙂 No matter what risk model you're using, it's possible (common?) to encounter the situation where a supermajority of risks sits in the same band of impact and likelihood. I call this clumping.
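To make clumping concrete, here's a minimal sketch in Python; the band names, threshold, and example register are all made up for illustration. It counts how many risks land in each impact/likelihood cell and flags any cell holding a supermajority:

    from collections import Counter

    def find_clumping(risks, threshold=0.67):
        # Flag impact/likelihood cells holding a supermajority of risks.
        # risks: list of (impact, likelihood) tuples, e.g. ("medium", "high").
        # threshold: fraction of the register that counts as a clump.
        cells = Counter(risks)
        total = len(risks)
        return [(cell, count) for cell, count in cells.items()
                if count / total >= threshold]

    # Made-up register: most entries sit at medium/high -- classic clumping.
    register = [("medium", "high")] * 7 + [("low", "low"), ("high", "medium")]
    print(find_clumping(register))  # [(('medium', 'high'), 7)]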

This can be a good thing, assuming you caught the observation before your stakeholders did. It's good because it's an opportunity to improve:

The quality of your inputs. This is usually the case when you don't have actual incident data, and it's a great gut check to see whether you're evaluating opinion vs. evidence. I encourage security professionals to apply their experience and assert an opinion; models don't think for you. Predicting the future is hard, though, so you need concrete examples to back up your prediction. If you don't have evidence, state that fact and lower the severity of your inputs. When you're wrong, it's documented and you can get smarter. This is related to the Evidence is Gold post.

 

Type of inputs. If you don't have real examples, go get them. If you don't have the budget to conduct an assessment, I suggest stating that fact and retitling the risk as "Unknown impact/likelihood of xxxxxx." This is a great time to declare the existence of a dark corner and ask for the resources to shed some light on it. The action is simply to conduct an assessment before committing additional resources to mitigating an opinion. I've used this many times and it works well. No one likes spending millions on unvalidated risks.

Side note: don't let fear of an audit finding be your evidence. Audit folks are reasonable when presented with a business case. If not, they won't be there long after the embarrassment at the BoD meeting.

 

Your process. Revisit your guidance and education for those conducting assessments. I had to emphasize that selecting an input is a big deal; each element in the risk model should be defensible. If a number of areas lack evidence, that's a good story too.

 

Ways to address. One of my favorite forcing functions is to work backward from the executive discussion. At the end of the day you're either going to (1) do something, (2) continue to monitor, or (3) accept the risk. After you have a solid story across your inputs, re-evaluate which bucket your risk goes into. That's why in Risk Communicator we introduce hard color bands to force the conversation. There's room to prioritize within each band (act, eval, accept), but inclusion in the band is meaningful.
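The actual Risk Communicator bands aren't reproduced here, but the mechanics of a hard band can be sketched as a fixed lookup from impact and likelihood to act/eval/accept; the boundaries below are hypothetical:

    # Hypothetical hard-band lookup on a 3x3 qualitative scale.
    # These boundaries are illustrative, not Risk Communicator's.
    BANDS = {
        ("high", "high"): "act",
        ("high", "medium"): "act",
        ("medium", "high"): "act",
        ("medium", "medium"): "eval",
        ("high", "low"): "eval",
        ("low", "high"): "eval",
        ("medium", "low"): "accept",
        ("low", "medium"): "accept",
        ("low", "low"): "accept",
    }

    def band(impact, likelihood):
        # Inclusion in the band is the decision that matters;
        # ordering within the band is just prioritization.
        return BANDS[(impact, likelihood)]

    print(band("medium", "high"))  # act

The value of hardcoding the mapping is that no one can nudge a risk across a boundary without changing an input, which is exactly the conversation you want to force.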

What about edge cases, i.e., low probability/high impact?

 

If you or your industry haven't had a complete compromise or an oil rig explosion, this is when you present the scenarios to your business owners. Edge cases should not be decided by security alone. There's power in making the business accountable, and it's the best way to get the business to invest proactively. This isn't CYA; it's good risk management, because one party is not all-knowing.

Are qualitative assessments more inclined to risk clumping than quantitative?

In my experience, no. It depends on whether your model has sufficient granularity across the act, eval, and accept areas (whether they're explicit or not) and on the quality of your inputs. Admittedly, I haven't had luck with quantitative models, but I'm not a stats expert; I welcome comments and education from you out there. For me, it's all about the business case. Models are tools for efficiently organizing and prioritizing large amounts of data and opinion. An output from a Monte Carlo simulation must withstand the same evidence-evaluation exercise as a qualitative assertion. Feedback most welcome!
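As a footnote for readers who haven't run one: a Monte Carlo risk model can be tiny. The sketch below simulates annualized loss with entirely made-up frequency and severity parameters, which is the point; those parameters are asserted inputs and deserve the same evidence scrutiny as a qualitative medium-high.

    import math
    import random

    def poisson(rng, lam):
        # Knuth's method for sampling a Poisson count; fine for small lambda.
        limit, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    def simulate_annual_loss(freq_mean=2.0, loss_mu=10.0, loss_sigma=1.5,
                             trials=100_000, seed=42):
        # Toy annualized-loss simulation. Every parameter here is an
        # asserted input -- made up for this sketch -- and needs evidence.
        rng = random.Random(seed)
        totals = []
        for _ in range(trials):
            events = poisson(rng, freq_mean)  # incident count this year
            totals.append(sum(rng.lognormvariate(loss_mu, loss_sigma)
                              for _ in range(events)))
        totals.sort()
        return totals[trials // 2], totals[int(trials * 0.95)]

    median, p95 = simulate_annual_loss()
    print(f"median annual loss ~{median:,.0f}; 95th percentile ~{p95:,.0f}")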
