
Risk Models: They Don’t All Suck

January 11, 2010

I had a good time reading a Securosis thread on how to measure risk framework effectiveness. Check it out; Securosis is often informative and entertaining. It was just a blog conversation, but some themes are worth expanding beyond the comment box. One was the general pessimism about risk models: words like "placebo," "waste," and "irrelevant" were thrown around (see the thread for context). These are smart folks, and I'm sure their comments reflect their experience.

One comment I really agree with: a risk model is a means, not an end. So, a means to answer what?

- Where should you spend your security dollar?

- What is the business risk if you don't receive the requested resources?

- How effective are your spending decisions?

I like to start with what a risk model cannot do: it can't think for you (maybe a better title for this post :). I define a garbage risk model as any model that abstracts away why you made your loss prediction. If it's too complex to quickly understand the reasoning, it's garbage, because it will be challenged and you'll lose credibility. I've only been successful with a risk model that simply enables you to consistently and clearly apply your evidence and experience.

So, enough pontification; here's our boring and simple approach (should I paste in screenshots, or would you prefer to hit pause on our YouTube video?). Risk = likelihood * impact. For each, type in your supporting evidence and experience. Want to break likelihood and impact down further? Click to add more detail. Likelihood comprises vulnerability attributes and current control effectiveness; type in your evidence. Impact comprises asset value and exposure attributes; type in your evidence. We have a pretty heatmap to display it all, but the magic is clicking on a risk and seeing the evidence that explains why one risk is more important than another. That makes it easy to debate and refine evidence and risk rankings, easy to communicate up and down as needed, and easy to compare today's prediction to actual future events. When you're wrong, update your evidence and be that much wiser. Even graph it to really rub it in.
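To make that concrete, here's a minimal sketch of the structure in Python. It's illustrative only, not our actual implementation: the 1-to-5 scale, the averaging of sub-factors, and every class and field name are assumptions I'm making for the example. The point is that each score carries the evidence that justifies it, so the ranking can always be traced back to "why."

```python
# Illustrative sketch only -- the scale, the averaging, and all names
# here are assumptions, not the product's actual data model.
from dataclasses import dataclass


@dataclass
class Factor:
    score: float   # e.g. 1 (low) to 5 (high); pick a scale and standardize on it
    evidence: str  # the written justification -- the part that matters


@dataclass
class Risk:
    name: str
    vulnerability: Factor           # likelihood side: vulnerability attributes
    control_effectiveness: Factor   # likelihood side: scored so higher = weaker controls
    asset_value: Factor             # impact side: what the asset is worth
    exposure: Factor                # impact side: exposure attributes

    @property
    def likelihood(self) -> float:
        return (self.vulnerability.score + self.control_effectiveness.score) / 2

    @property
    def impact(self) -> float:
        return (self.asset_value.score + self.exposure.score) / 2

    @property
    def risk(self) -> float:
        return self.likelihood * self.impact  # Risk = likelihood * impact

    def explain(self) -> str:
        # The "click on a risk" view: every score next to its evidence.
        factors = [("vulnerability", self.vulnerability),
                   ("control effectiveness", self.control_effectiveness),
                   ("asset value", self.asset_value),
                   ("exposure", self.exposure)]
        return "\n".join(f"  {label}: {f.score} -- {f.evidence}"
                         for label, f in factors)


# Rank risks and show the reasoning behind each number.
risks = [
    Risk("unpatched internet-facing CMS",
         vulnerability=Factor(4, "public exploit for a known RCE"),
         control_effectiveness=Factor(4, "no WAF; quarterly patch cycle"),
         asset_value=Factor(2, "marketing site, no customer data"),
         exposure=Factor(5, "directly internet-facing")),
]
for r in sorted(risks, key=lambda r: r.risk, reverse=True):
    print(f"{r.name}: risk = {r.risk:.1f} "
          f"(likelihood {r.likelihood} x impact {r.impact})")
    print(r.explain())
```

Because the ranking is derived from evidence-backed scores, updating a factor's evidence after a missed prediction automatically reorders the list; the "why" stays attached to the number.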

 

Sure, you're applying subjective analysis with your evidence. That's what you get paid to do. I used to tell my teams, "Embrace the subjectivity; just back it up with evidence."

Explaining your prioritization in a simple way is only one step toward answering the questions above. We'll either blog or produce videos showing how we facilitate the above and the rest. A man much smarter than me once said, "It's not that hard." And it isn't, if you keep it simple and transparent.
