It's been a fun CISO tools series, so let's close it with a bang. Number 7 is the ever-intimidating operations scorecard. I've only seen a handful of actual working scorecards throughout my career, and each was different. The lack of standardization is acceptable at this point in the maturity of security measurement, and likely beneficial since everyone has different motivations, audience comprehension, and access to information. As mentioned in the balanced scorecard post, the most important step is to just get started and build or advance your culture of transparent and accountable IT services.
The operational scorecard communicates trends and performance across your operational services. It's a more tactical view that feeds the balanced scorecard. While lower-level, it still only contains business-relevant metrics that must pass a few tests. To start, each metric must:
1. have a defined and communicated target.
2. pass the "so-what" test, i.e. a non-technical person can understand what it means to the business.
Of course you also need the usual criteria: automated, repeatable, etc. Much has been written on how to define a metric. I think Andrew Jaquith has the best book on the subject, but a book (or a blog post) can only take you so far. The key is to take action. I assume you've read the books and visited securitymetrics.org, so I'll focus on areas I think are underserved. Obviously security metrics are in their infancy. We have plenty of resources telling us what kinds of areas to measure, but little showing us how to collect and report the information.
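As a rough sketch of those two tests (the class and field names here are my own illustration, not from any particular tool), you could capture them in the metric record itself, so a metric without a target or a plain-English "so what" never makes it onto the scorecard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Metric:
    name: str
    target: Optional[float]  # test 1: a defined, communicated target
    so_what: str             # test 2: what it means to the business, in plain English

    def passes_tests(self) -> bool:
        # Qualifies only with both a target and a non-empty "so what" story.
        return self.target is not None and len(self.so_what.strip()) > 0

m = Metric("% devices managed for security", target=95.0,
           so_what="Managed devices reduce disruptions and cleanup costs.")
print(m.passes_tests())  # True
```

A gate like this is easy to bolt onto a spreadsheet export, too: any row missing either field gets flagged before it reaches readers.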
The "I don't have enough data" problem.
I bet you do. There's no chicken-and-egg dilemma. The key is to start small and demonstrate value to justify additional resources. As you pick your metrics, here are some additional tips I don't remember reading anywhere. First, your metrics should tell a story. Work backward from the desired outcome and pick metrics in the following business-relevant "story lines." The first two are written about the most:
Value to business: areas where your team contributed to revenue. This may be thin but keep refining till you strike a vein e.g. % of priority business initiatives with security involved at design phase. If outsourcing is part of your business, vendor management and assessment metrics can go here. Don't be tempted to create a metric to highlight a win. Save the specific anecdotes for your quarterly meetings.
Reduce impact to business: # of business-impacting incidents, severity of impacting incidents, mean time to recover (MTTR), $ reduction in fraud, % of customer turnover due to security.
Efficiency: Take a minute to show off or at least set expectations. Readers need to know you're valuable and thrifty e.g. avg days for access certification, hours to provision, % roles with automated profiles, % processes with SLAs, % processes within SLA, even % processes with defined RACIs.
Control posture: I disagree with folks who say ops metrics, e.g. % devices managed for security, don't pass the so-what test. It's our job to help stakeholders understand the relevance of security controls, e.g. managed devices reduce disruptions and cleanup costs. One of the emotions you want readers to internalize is "those IT security folks have got it covered."
But I still don't have the data...
Yes, you do. Do you scan your endpoints and servers? Do you compare your scan inventory with the spreadsheet or CMDB from IT? Do you participate in incidents? Administer access? Interact with people outside IT? Of course. The key is selecting the relevant metric and communicating it properly. Heck, you could have an early metric simply measuring % of metrics with established baselines and targets, or % of controls with an operational metric. You only need a few to start.
Okay, what's next?
Now that you're fired up, I'll expand on the three basic steps:
1. Design
2. Data Entry & Management
3. Reporting
Step One: Design
I haven't seen a one-size-fits-all repository of metrics. Below are some of the metrics in our Metrics Manager repository.
A quick word about organizing metrics. We currently use a three-tiered structure: a category contains a collection of metrics, and each metric can have separate series of data. A series can track remote locations or different business units, e.g. if you'd like to measure control performance for central IT and remote offices separately.
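A minimal sketch of that three-tiered structure; the category, metric, and series labels below are illustrative examples, not real data:

```python
# Three tiers: category -> metrics -> series
# (e.g. separate series for central IT vs. remote offices)
scorecard = {
    "Control posture": {                       # tier 1: category
        "% devices managed for security": {    # tier 2: metric
            "Central IT":     [88, 90, 93],    # tier 3: series of monthly values
            "Remote offices": [72, 75, 80],
        },
    },
}

# Drilling down to the latest value for one series:
latest = scorecard["Control posture"]["% devices managed for security"]["Remote offices"][-1]
print(latest)  # 80
```

The nesting mirrors the drill-down path a reader takes: start at the category roll-up, then expose the metric, then the series that explains the trend.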
Step Two: Data Entry & Management
As with all the CISO tools, you can use spreadsheets to get started (I used to). There are also tools out there (like ours), so you really have no reason to delay! I need to emphasize a tip I mentioned earlier: each metric should have a baseline and a target. Without them, it's just a statistic. When you define where you were, where you are, and where you want to go, you tell a story. Another benefit of these three data points is the ability to create an expected value at any point between them, i.e. how fast will you hit the target, or are you already there and just need to optimize. This can get tricky in spreadsheets as the months roll by, but we did it at msft and wamu so I know you can do it too.
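Assuming a simple linear pace between baseline and target (as noted later, real progress may be flat for months and then jump), the expected value at any date can be interpolated from just those three data points. The dates and percentages below are made up for illustration:

```python
from datetime import date

def expected_value(baseline: float, target: float,
                   start: date, end: date, today: date) -> float:
    """Linear expected progress between the baseline date and the target date.
    Before start -> baseline; after end -> target; linear in between."""
    if today <= start:
        return baseline
    if today >= end:
        return target
    frac = (today - start).days / (end - start).days
    return baseline + (target - baseline) * frac

# Baseline of 60% on Jan 1, target of 90% by Dec 31:
# what should the metric read on Jul 1 if we're on pace?
e = expected_value(60.0, 90.0, date(2011, 1, 1), date(2011, 12, 31), date(2011, 7, 1))
print(round(e, 1))  # 74.9
```

Comparing each month's actual value against this expected value is what turns a raw statistic into "ahead of plan" or "behind plan," which is the story readers actually want.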
Another tip is to let the metric owner drive the baseline and target definitions. It's empowering to be able to set your own bar. It's also motivating to achieve the selected targets. As long as targets are realistic, they make great performance review evidence.
One more note related to metric design. Many of your business relevant metrics will be a combination of tactical metrics. For example, % accuracy of inventory is a calculated field from what you enumerate vs. what IT tracks. You'll either have to build a business intelligence system, maintain many spreadsheets, or swivel chair the numbers from tactical outputs into your metric tracking tool. I do know of one IT shop investing in a BI platform to integrate source feeds, apply business logic, and present relevant results. I wish we were all there! If you don't have the resources for BI, check out the middle ground of summarizing tactically, then manually reporting the relevant metrics.
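For example, % accuracy of inventory could be derived by comparing the hosts you enumerate by scanning against what IT tracks. The host names and the exact formula (overlap over the union) are my own illustration of one reasonable definition:

```python
# What our scans enumerate vs. what IT's CMDB tracks (hypothetical hosts).
scanned = {"host-a", "host-b", "host-c", "host-d"}
cmdb    = {"host-a", "host-b", "host-c", "host-e"}

# Hosts both sources agree on, over all unique hosts either source knows about.
matched = scanned & cmdb
accuracy = 100 * len(matched) / len(scanned | cmdb)
print(f"{accuracy:.0f}%")  # 3 of 5 unique hosts match -> 60%
```

Whether you build this into a BI platform or swivel-chair it from two exports, the business logic is the same; the notes field then explains why, say, a datacenter migration spiked the mismatch one quarter.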
Also, set aside a place to capture notes for each metric recording period, especially for calculated fields. Dependencies and surprises happen. It doesn't mean you're failing, you just need to explain why you're green one quarter and blazing red the next. No one said transparency is easy, just valuable.
Here's a shot of how Metrics Manager tackles data entry and management. Note the ability to assign a baseline (red dot) and target (green dot). You should also allow for multiple targets to set expectations on the pace of progress. The example below shows linear expected progress. In real life this could be flat for six months and then jump up. Or your baseline could already be at your target level and you're simply tracking progress. The key is to think of a metric target as an "acceptable risk" definition for that point in time. As you reach your targets, you can re-evaluate whether they're optimal for the business.
Step Three: Reporting
All is for naught if you don't have the eye candy to communicate your story. Yes, I feel so strongly about this that we invested six months in our first applications to make the tasks of organizing and presenting data easier and more effective. You have to draw the future picture. It's important to define what success looks like and work toward it; otherwise you'll never get there. Defining what success looks like is also self-fulfilling. If you write down your optimal scorecard but only have two metrics started, you have another nice story to show how additional investment in security (to collect and measure evidence) will translate into better risk decisions for the business.
I have a few goals for metrics reporting:
Again, you can do this in spreadsheets; it just takes a bit longer. Here are a couple of screenshots from our tools to get you started.
Group table summary and individual drill-down. Note we also track if you're trending toward or away from your expected progress.
The overall metric roll-up always sits on top of the table. We call ours the "Master Security Index." It shows the weighted average % distance between actual values and their expected-progress values. Thus, you get a high-level view of whether security is trending toward or away from expectations. If the overall index doesn't represent your story, you can drill down to expose the areas of concern. You can also exclude specific metrics for what-if scenarios.
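A sketch of how such a roll-up might be computed. The sign convention (positive means ahead of expectations), the per-metric weights, and the function name are my assumptions for illustration, not necessarily how Metrics Manager implements it:

```python
def master_security_index(metrics: list[tuple[float, float, float]]) -> float:
    """Weighted average % distance of actual values from expected-progress values.
    Each entry is (actual, expected, weight).
    Positive -> ahead of expectations; negative -> behind."""
    total_weight = sum(w for _, _, w in metrics)
    return sum(w * (actual - expected) / expected * 100
               for actual, expected, w in metrics) / total_weight

metrics = [
    (93.0, 90.0, 2.0),   # slightly ahead of plan, weighted higher
    (70.0, 80.0, 1.0),   # behind plan
]
print(round(master_security_index(metrics), 1))  # -1.9
```

Excluding a metric for a what-if scenario is then just dropping its tuple from the list and recomputing, which is exactly the drill-down story described above.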
I hope this post inspired you to start or advance how you collect evidence and communicate your progress. It's dirty work translating tactical statistics into relevant metrics, but the payback in credibility and in demonstrating the value security brings to the business is well worth it. The Ops folks have their throughput and uptime metrics; show them what security can do.
One item I forgot to mention above is the added accountability and pride your team will suffer through. You might even see all five stages of grief here... Every IT security department I've seen has areas they know they should be doing better. It really hits home when you broadcast the numbers. Please don't hide these. Celebrate them! What a great opportunity to come clean and justify why improving your posture is good for the business. If you still don't get the resources to raise the bar, no problem. You now have another way to show what acceptable risk is for your business.
You'll also be challenged with your team being "too busy" to calculate and enter their monthly metric data. It's important that measurement is a planned activity, not another task thrown on the stack. If you're too busy to measure what you're doing, you may be doing the wrong things. In six months, when you look back and see the trends, you'll love the fact that you have an evidence-based story to tell. Count on it!