
How To Measure Anything: Great Book, Practical Takeaways

July 21, 2010

Is it a problem that I sneak away on vacation to digest How To Measure Anything by Douglas Hubbard? Probably. But if loving to improve risk management is wrong, I don't wanna be right. I'll try to keep this short since it could turn into a series. Let me know if you want to discuss specific areas. I'll also try to map some takeaways back to our work here as I go.


First off, this is a great book. If you've read this far into the post, go get this book. If you're familiar with measurement approaches, you won't read any revelations. However, you might experience a few revelations by applying the material to your situation and background. Here's my quick rundown.


Section I: Measurement: The Solution Exists


Mostly cheerleading, but good to get pumped up. This is a nice, quick read and will leave you nodding.


Section II: Before You Measure


Ding dong! Good stuff. The best section of the book. I could write a post for each chapter.


Chapter 4: Clarifying the Measurement Problem. Gets to the root of "why" we measure and how to approach risk management. The chapter also includes a nice, simple example of Hubbard's work for the VA measuring the impact of virus attacks. This is a great example of why documenting incident data is crucial. Solidifying evidence continues to serve long after the dust settles. It's not hard to measure and prioritize risk with real data. No pretty charts or fancy math required.


Chapter 5: Calibrated Estimates: How Much Do You Know Now? I think this is one of the most important chapters of the book. Hubbard details how the art of estimating is taught and applied. He instructs you, and tests you, to swallow your over-confident intuition and create defensible estimates using 90% confidence intervals (CIs). My view is that calibrated estimates are the basis for any risk model, quantitative or qualitative.
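To make calibration concrete, here's a minimal Python sketch of how you might score yourself after a calibration exercise: give a 90% CI for each question, then check how many true answers actually land inside your intervals. The questions and numbers below are made-up illustrations, not from the book.

```python
def hit_rate(estimates):
    """Fraction of actual values that fall inside the stated 90% CI."""
    hits = sum(low <= actual <= high for low, high, actual in estimates)
    return hits / len(estimates)

# Hypothetical calibration-test answers: (low, high, actual)
answers = [
    (20, 80, 55),       # annual phishing incidents
    (10, 45, 30),       # days in a patch cycle
    (1, 10, 4),         # admins with domain access
    (100, 500, 650),    # records per incident -- a miss
    (5, 25, 12),        # vendor-reported vulns per quarter
]

print(f"Hit rate: {hit_rate(answers):.0%}")  # 4 of 5 inside -> 80%
```

A calibrated estimator's hit rate converges toward 90%; most of us start well below that, which is Hubbard's point.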


Chapter 6: Measuring Risk: Introduction to the Monte Carlo Simulation. I thought this was a nice overview. I was left wanting more, but probably because I've researched Monte Sims in the past. One area where I felt slighted was the overview of selecting the proper distribution for the Sim. Hubbard does a wonderful job of instructing readers to learn about CIs, but selecting a disti is another key input where I wanted more insight.
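For anyone who wants to poke at a Monte Sim without buying tooling, here's a hedged sketch using only the Python standard library. It assumes, purely for illustration, a Poisson-distributed event frequency (sampled via exponential inter-arrival times) and a lognormal per-event impact; the rate and impact parameters are placeholders, and you'd swap in whatever disti your evidence supports.

```python
import random

random.seed(7)  # reproducible illustration

def poisson_sample(rate):
    """Sample a Poisson event count via exponential inter-arrival times."""
    count, t = 0, random.expovariate(rate)
    while t < 1.0:  # events within one year
        count += 1
        t += random.expovariate(rate)
    return count

def annual_loss(freq=3.0, impact_mu=10.0, impact_sigma=1.0):
    """One simulated year: Poisson event count x lognormal impact per event.
    All parameters are illustrative assumptions, not measured values."""
    return sum(random.lognormvariate(impact_mu, impact_sigma)
               for _ in range(poisson_sample(freq)))

years = sorted(annual_loss() for _ in range(20_000))
print(f"median annual loss: ${years[len(years) // 2]:,.0f}")
print(f"95th percentile:    ${years[int(len(years) * 0.95)]:,.0f}")
```

The gap between the median and the 95th percentile is exactly the kind of thing a point estimate hides, and it's where the disti choice matters most.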


Momentary side-bar: Our dev lead, Jon (who has a master's in math, btw), has a great quote: risk models shouldn't just calculate something, they should communicate something. I've seen Jon create distributions across various attacks. Some disti's are Poisson, maybe some are normal, most are not known. So the key is to research and use a disti (PERT, uniform, normal, etc.) that applies to your risk scenario and your environment. I'm still convinced that by the time you have the evidence to understand the proper CIs and distribution to run through a Monte Sim, the improvement in accuracy does not warrant the overhead and education the simulation requires. At that point you have sufficient evidence to prioritize risk and, more importantly, communicate the risk in a scenario a non-technical stakeholder easily understands.


Collecting evidence and applying your experience to support your 90% CI is a great thing. Baking a difficult-to-defend distribution into your communication is another. In my experience, the straightest path from evidence to the risk decision is best. Any complexity or deviation from that straight path introduces uncertainty and puts your team's credibility at risk when facilitating risk tolerance decisions. For an out-of-context example, Hubbard references a Stanford professor who suggests a role of Chief Probability Officer to oversee the quality of Probability Management, with examples such as a distribution library. My takeaway was that this is not to be taken lightly and requires research. I could be wrong, but it's clear a lot of training and vetting needs to occur. Back to the chapters.


Chapter 7: Measuring the Value of Information. I love this chapter. It reinforces the need to focus on which information matters most and when to invest more time to collect evidence and reach our 90% CI. The concepts of "Expected Value of Information" vs. "Expected Value of Perfect Information" are wonderful guides showing us when to keep investing in evidence collection.
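For a simple act/don't-act decision, the perfect-information idea reduces to a few lines. The numbers below are hypothetical: a $50k control that fully prevents a $200k loss you estimate at 40% likely. EVPI caps what any further evidence collection could ever be worth, which is why it's such a useful stopping rule.

```python
def evpi(p_loss, loss, control_cost):
    """Expected Value of Perfect Information for a binary act/don't-act choice.
    Assumes the control fully prevents the loss; all figures are illustrative."""
    cost_act = control_cost            # buy the control regardless
    cost_wait = p_loss * loss          # accept the risk instead
    best_without_info = min(cost_act, cost_wait)
    # With perfect foresight, you pay for the cheaper option in each outcome:
    # control (or the loss, if cheaper) when the loss would occur, nothing otherwise.
    cost_with_info = p_loss * min(control_cost, loss)
    return best_without_info - cost_with_info

print(evpi(p_loss=0.4, loss=200_000, control_cost=50_000))  # -> 30000.0
```

In this made-up scenario, no amount of extra measurement is worth more than $30k, so a $100k study of the question would be a bad buy no matter how good the data.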


For brevity, I'll condense Sections III & IV. More great information on focusing what to measure, sampling, and an intro to Bayesian statistics. Section IV, "Beyond the Basics," dives into understanding, and respecting, our intuition as experts. It also provides models for how to rein in our intuition and limit our many biases. The book wraps up by bringing all the elements together through examples of Hubbard's Applied Information Economics.


Some additional takeaways:


Hubbard is obviously not a supporter of qualitative analysis. He does a great job highlighting the many shortcomings of qual and shows how to overcome them with quant. In short, I think there's a lot to apply from these lessons to improve current qualitative approaches. Don't expect this book to instantly replace your qualitative approach if you use one. Some quick examples:


- Relative scales: some qual scales don't respect the degree of variance across impacts. I agree. And they don't have to. Qual scales don't have to be linear. For example, a qual scale of 1-10 can have non-linear monetary impact ranges, so the band from 1 to 2 is far narrower than the band from 9 to 10. It's up to your evidence to decide and up to you to define and communicate. Also, if you find a risk with such a high degree of variance, you probably need more evidence to calibrate your estimates, or you might really be evaluating more than one risk.


- Degree of reproducibility (precision, in Hubbard's terminology): being able to produce consistent assessments is a problem in any model. That's why Hubbard focuses so much on calibrated estimates, and it's a great thing. This same rigor must be applied to qualitative assessments. Personally, I didn't see how a disciplined quant application is, by default, more precise than a disciplined qual application.


- Accuracy: qualitative assessments are notorious for vague ranges. Again, the same is true for quant if you don't achieve calibrated CIs. This isn't done through math. It's done through evidence and experience. Real gumshoe work is required, i.e., Evidence is Gold. There's also no reason why you can't include quantitative ranges in your qualitative scenarios when you have the evidence to back them up.

Another aspect of accuracy is how many variables are really needed to predict future loss. One thing that makes Monte Sims great is their ability to analyze a large number of variables. How many variables are needed to predict the frequency of a successful attack: control effectiveness, attempt frequency, ease of exploit, attacker capability, etc.? I don't know the exact number, but it isn't more than a handful. In my experience, predicting ranges is best addressed through evidence collection (oops, getting redundant). I believe 90% of risk management should be done in the field. The remaining portion should be modeling to facilitate impact ranges with asset owners, translating your evidence to predict frequency, and then making an informed business decision. The process works when you have great evidence and role definition, regardless of your model.
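The non-linear scale idea from the relative-scales bullet above is easy to sketch. Here's a hypothetical geometric mapping from a 1-10 qual score to monetary bands; the base and growth factor are placeholder assumptions you'd tune, and defend, with your own evidence.

```python
def impact_band(score, base=5_000, factor=3.0):
    """Map a 1-10 qual score to a monetary range that grows geometrically.
    base and factor are illustrative knobs, not recommended values."""
    low = 0 if score == 1 else base * factor ** (score - 2)
    high = base * factor ** (score - 1)
    return low, high

for s in (1, 2, 9, 10):
    low, high = impact_band(s)
    print(f"score {s:2d}: ${low:,.0f} - ${high:,.0f}")
```

The point is only that the jump from 9 to 10 can legitimately dwarf the jump from 1 to 2; the scale stays qualitative to stakeholders while the bands behind it stay honest to the evidence.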


In my experience an efficient process is the key. I think Hubbard makes this point soundly too.

Whew. Apologies for the length of this. Gotta run. There's a ton of great content and I only scratched the surface. Let me have it, and please help expand my and everyone's knowledge. I'm a fan of quantitative analysis and will continue to be a student to improve our ability to measure and facilitate risk decisions.

In my personal opinion, the overall effectiveness of quantitative models doesn't currently surpass qualitative for information security. I'll keep learning but will agree to disagree with some on this point.

Almost forgot one HUGE note. A couple of Hubbard's blog articles highlight (I want to say call out) the whole industry of risk management for not measuring the effectiveness of risk models. How can we say one model is better than the next without looking at our past performance? How can we as a security industry solve this problem? The academics haven't solved it. If you're interested in this topic, check out Hubbard's blog. I found it very insightful. One quick quote, out of context, from a blog article regarding quant's superiority: "Some evidence exists..." Oh, the irony. I know someone can measure that if the value is there :)


All of us will have to continue advancing, sharing, and measuring our efforts to improve how we can effectively help the business manage risk.


Good stuff.

