Someone pointed me to a Risk Management Insight blog post titled "Lipstick on Pigs." It wasn't aimed at Risk Communicator specifically, but it included a screenshot, so I jumped into the thread. Overall it's a fine post, and it was also picked up by a Securosis post. Rothman's summary gave me my title.
I'm not sure why, but when some people see a picture of risks they think you're trying to snow them. I'm not criticizing Jack or others; it's just something I've observed over the years. One hypothesis is that many risk models bury their justification a couple of layers down in the algorithm, requiring the reader either to question everything or to become an expert. This is what happened to me when I tried to quantify risk in the past. By the time everyone was on the same page, the meeting was over. Or we never got on the same page because we couldn't agree on the $ value of something...
Back to the Lipstick post:
First off, we try to set expectations that Risk Communicator is optimized to prioritize risk drivers for portfolio planning. It does have a Detailed Risk Model that can be used for tactical, asset-specific assessments. However, many features for conducting tactical assessments are omitted on purpose, e.g. threat and asset catalogs and risk grouping. We'll tackle tactical assessments later in our roadmap. We put the Risk Details widget in Risk Communicator because some teams want to break down the risk statement and paste in specific evidence for vulnerability attributes, control effectiveness, specific exposures, and asset types.
Thus our goal is to provide the capabilities for you to communicate the right balance of information to help your business predict and manage future losses. Someone in the comments section made a great point - it depends on your decision makers. If they need to understand the capabilities of a threat agent, great, include that factor in your analysis. If your decision makers simply need to know the summary evidence, that's fine too. Risk models are simply communication vehicles. Their value lies in consistent questions, answers, and organization of evidence. Thus, garbage in, garbage out. Or if you do your homework: evidence in, decisions out!
At the risk of beating a dead horse, no risk tool contains artificial intelligence or thinks for you. In fact, I think any model that doesn't have a simple path back to your evidence reduces your ability to communicate. If you want to quantify risks, assign values to the categories where risks sit. For example, we provide guidance on how to associate cost and other impact drivers with the 1-10 impact range. Same for the frequency of attacks on the likelihood scale, e.g. evidence pointing to an exploit within one year can be mapped to values of 8-10. It's your story.
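To make the idea concrete, here is a minimal sketch of that kind of evidence-to-scale mapping. The thresholds, function names, and the multiplicative ranking are my own illustrative assumptions, not Risk Communicator's actual model; the only anchor from the post is that evidence of an exploit within a year lands in the 8-10 likelihood band.

```python
def likelihood_score(years_to_expected_exploit: float) -> int:
    """Map time-to-exploit evidence onto a 1-10 likelihood scale.

    Thresholds are hypothetical; the post only notes that an exploit
    expected within 1 year should map to the 8-10 band.
    """
    if years_to_expected_exploit <= 1:
        return 9
    if years_to_expected_exploit <= 3:
        return 6
    if years_to_expected_exploit <= 10:
        return 3
    return 1


def impact_score(estimated_loss: float, worst_case_loss: float) -> int:
    """Scale an estimated dollar loss onto a 1-10 impact range,
    relative to an agreed worst-case loss for the portfolio."""
    ratio = min(estimated_loss / worst_case_loss, 1.0)
    return max(1, round(ratio * 10))


def risk_rank(likelihood: int, impact: int) -> int:
    """A simple composite for ranking risk drivers in a portfolio view."""
    return likelihood * impact


# Example: exploit expected in ~6 months, ~$300k estimated loss
# against a $1M worst case.
print(risk_rank(likelihood_score(0.5), impact_score(300_000, 1_000_000)))
```

The point isn't the particular thresholds; it's that each score has a one-step path back to the evidence (time-to-exploit, dollar loss) that produced it, which is what keeps the model communicable.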
It's important that expectations are set correctly with tools like Risk Communicator. It's an efficiency tool. What makes Risk Communicator valuable isn't the risk model; it's the ability to navigate the workflow quickly, communicate with evidence readily available, and automatically produce visual reports. Success is when you spend more time collecting evidence than formatting it.
Overall, I really like these discussions as they help us get better and help advance our profession one critique at a time. Bring it on!