Posts Tagged ‘Decision Making’

Over the rainbow and through should/could/woulds, to the Echo Chamber we go

Monday, October 15th, 2012

I just read a very curious blog post titled “Somewhere Over The Rainbow – A Story About A Global Ubiquitous Record of All Things Incident”. Thanks to Jelle Niemantsverdriet (@jelle_n) for calling my attention to it.

There are many reasons I find this post so curious, one being that it says several times that the purpose of collecting incident data is to convince management that they should spend money. Furthermore, the first paragraph closed with this:

“…even if we have accurate data, exactly how useful is the data to us in the formulation of risk management decisions, and third, even if the data is accurate and useful, did we even need it in the first place?”

Now, I may be reading that incorrectly, but it seems to question the utility of data in making risk management decisions. Let’s review for a sec. Decisions made under high uncertainty are more likely to result in less-favorable outcomes. In general, reducing uncertainty enables better decision-making. Uncertainty is a lack of perfect information. Accurate data improves our information and therefore reduces uncertainty. Hence, data improves our ability to make decisions and manage risk.
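That chain of reasoning can be made concrete with a toy calculation. As a purely hypothetical illustration (the model and every number below are mine, not from the post), treat the chance that a given attack type succeeds as a Beta-distributed quantity and watch its variance, our uncertainty, shrink as incident records accumulate:

```python
# Toy illustration: observed incident data reduces uncertainty.
# We model "probability a given attack type succeeds" as a Beta
# distribution; its variance falls as records accumulate.
# All counts are invented for illustration only.

def beta_stats(alpha, beta):
    """Mean and variance of a Beta(alpha, beta) distribution."""
    mean = alpha / (alpha + beta)
    var = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return mean, var

# Prior: no data at all, maximum uncertainty (uniform over [0, 1]).
prior_mean, prior_var = beta_stats(1, 1)

# Posterior after 50 recorded incidents, 10 of which involved this
# attack type (hypothetical counts from a hypothetical repository).
post_mean, post_var = beta_stats(1 + 10, 1 + 40)

print(f"prior:     mean={prior_mean:.3f}  variance={prior_var:.4f}")
print(f"posterior: mean={post_mean:.3f}  variance={post_var:.4f}")
```

The posterior variance is a small fraction of the prior's, which is exactly the point: the data didn't replace anyone's judgment, it narrowed the range of defensible answers.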

Now, incident data might indeed convince management that they need to invest in security, but only because it reveals that current levels of risk are outside of tolerance and must be treated through control expenditures. For others, the benefit of collecting data may be to identify areas of overinvestment in security. In other words, a desirable outcome like that might be a byproduct of collecting incident data, but it’s not the main goal. The goal is reducing uncertainty, and we do that by improving feedback.

I also find the second-to-last paragraph odd; perhaps even more so than the assertion above.

“What we’re really concerned with here is trust. The proponents of a big data repository of incident big data would have it that we need such a thing because the powers that be don’t trust us. When we propose a mitigation of a particular risk, they don’t trust our advice….By looking into the history of all incidents we’re setting a dangerous precedent, and rather than enabling trust, we’re making the situation even worse.”

Wha? Again, perhaps I’m misunderstanding. My reading of this is that it’s basically saying “collecting data is dangerous because it might lead to decision-makers trusting it more than us experts.” Or, more colloquially, “this sciency stuff might endanger our influence as shamans.” That sentiment is, quite simply, based in ignorance of what data is and how we can use it. The point of gathering data is not to contradict or circumvent experts – the point of gathering data is to build better experts. This is not an adversarial relationship; it is simply a method of improving our information through feedback. There are already all kinds of problems with so-called expert advice in the security industry, and I long for the day when our opinions are supported by and aligned with “a global ubiquitous record of all things incident.”

If you don’t long for that day, perhaps you’re holding on to your untested opinions a bit too tightly?

Breaking down the wall of words (or at least hanging some pictures on it)

Thursday, March 8th, 2012

This past week, several RISK Team members descended upon the lovely city of San Francisco for the annual RSA/Mini-Metricon/B-Sides pilgrimage.

On Monday, we did a quick lightning talk at Mini-Metricon on some of the things we’ve been doing lately with respect to attack modeling and analysis. If you missed it, you can check out Appendix A in the soon-to-be-published 2012 DBIR for a recap.

At B-Sides, we presented a talk titled “Your IR Team: More than Firemen and Maids.” The central premise was that organizations should use their incident responders for more than putting out fires and cleaning up messes. Instead, also think of them as generators of valuable data that can inform security decision-making. In working with both the IR and risk management sides of many organizations, we so often find that these two groups do not share information to the degree they could/should. Decision-makers constantly decry the lack of useful security data at their disposal, yet they have a “treasure trove” right under their noses. We’ll post the preso once the B-Sides crew has the chance to get it up. Until then, you can check out the hand-sketched version here.

Regarding that talk, one attendee had the following to say:

“By my count, the fourth B-Sides SF talk this year to heavily feature statistics and suggest setting metrics. The presentation made an argument for formally tracking and classifying incidents, for instance using the VERIS framework. The talk was quite compelling and did a good job illustrating how incidents can be charted and visualized.

Unfortunately, when I visited the VERIS wiki I found it rather disorganized. To me, the wiki doesn’t do a good job of communicating how the framework can be implemented and throws up a wall of words rather than diagrams and practical implementations. In all fairness it is under construction, and does give some examples, but more concrete tools would be welcome. If someone would release a spreadsheet template or simple app (Python, Ruby, etc.) to jump-start organizations on their incident classification, that would be a huge public service.”

This is a fair critique, and not the first time we’ve heard these sentiments expressed. We definitely realize we have some work to do to make VERIS as usable as we know it can be. To start, we are updating the wiki to reflect the latest version of the VERIS Community framework. It is still, however, a “wall of words.” Over the last several months, we’ve made strides to change that. We’ve created an XML schema of the VERIS framework (currently in beta), a UML diagram to better help visualize the structure, and have begun conversations about open source tools. We’ve also realized that if we’re really going to achieve our goal of widespread VERIS adoption, we need to up our development and support of it. We’re making hires and prioritizing efforts to do that.
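In the spirit of the “simple app” the attendee asks for, here is a minimal, hypothetical sketch of an incident record in Python. The field names loosely echo VERIS’s actor/action/asset/attribute axes, but this is not the official schema, just an illustration of how little code a jump-start tool actually needs:

```python
# Hypothetical, minimal incident-classification sketch. Field names
# are inspired by VERIS's four "A"s (actor, action, asset, attribute)
# but do NOT reproduce the official VERIS schema or its enumerations.

import json
from dataclasses import dataclass, asdict

@dataclass
class Incident:
    incident_id: str
    actor: str       # e.g. "external", "internal", "partner"
    action: str      # e.g. "malware", "hacking", "misuse"
    asset: str       # e.g. "server", "user device", "media"
    attribute: str   # e.g. "confidentiality", "integrity", "availability"
    notes: str = ""

def to_json(incidents):
    """Serialize a list of incidents to JSON for sharing or analysis."""
    return json.dumps([asdict(i) for i in incidents], indent=2)

# Two made-up example records:
log = [
    Incident("2012-001", "external", "hacking", "server", "confidentiality"),
    Incident("2012-002", "internal", "misuse", "user device", "integrity"),
]
print(to_json(log))
```

Even a toy like this gets an organization past the blank-page problem: once incidents are structured records rather than prose write-ups, charting and trending them becomes trivial.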

Bottom line – we hear you. We’re working on it. If you’d like to give us some input on what we can do to make VERIS more useful to your organization, please let us know. If you’d like to use and provide feedback on the beta XML schema, we’ll be glad to point you in the right direction. For either of these, or to register any other helpful criticism, we’re all ears at veris@verizon.com or @verisframework.

Security decision methods poll results

Monday, October 12th, 2009

A couple of weeks ago, I wrote a post on how we in the security industry make decisions. After a bit of waxing philosophical, I proposed a list of decision “methods” I regularly see in use among organizations. I also created a small survey (that contained a few additional methods) to capture your experiences for comparison. The response was not overwhelming by any stretch, but the results are below.

[Image: security decision methods survey results]


Security Decisions – How do you make them?

Monday, September 28th, 2009

As a student of both the fields of Information Technology/Security and Management Science (http://en.wikipedia.org/wiki/Management_science), I often find myself looking at security issues through a “decision-oriented” lens. For the most part, these two disciplines make good bedfellows – especially when one considers that engineers dominate the Information Security field. Please don’t misinterpret this; I have a healthy respect for, and advocate our need of, engineers (I’ve even helped teach and graduate some of them). However, not all of our problems are engineering problems, and I do believe that our ability to truly manage information risk is hindered by a shortage of input from other disciplines (though I’ve seen at least some improvement in recent years).
One area where the engineering and management mindsets clash is decision-making. The engineer asks, “What do I need to know to precisely formulate all factors in this decision?” Meanwhile, the management scientist asks, “What do I need to know to make a good decision?” In such matters, I side heavily with the management scientist.

The obvious application of this is in evaluating potential security initiatives or projects (“Should we do X, Y, or Z?”). In most cases, it is impossible to precisely formulate all factors in the decision, so we abandon the “scientific” route and revert to some other method of making it (see below). This is where our predominantly engineering mindset hurts us. Instead, we should realize that organizations have always made decisions using varying amounts of information of varying quality. Our dilemma is not new. Valid and vetted approaches exist for structured decision problems with an abundance of precise data, and also for unstructured problems with sparse amounts of “fuzzy” data. They are out there, eagerly waiting for us to apply them to problems in our domain.
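As one small taste of such an approach, here is a toy expected-value comparison of hypothetical options X, Y, and Z. Every number below is invented purely for illustration; in practice the inputs would come from incident data and calibrated estimates:

```python
# Toy sketch of one simple, vetted decision approach: comparing
# security options by expected annual loss reduction versus cost.
# All figures are invented for illustration only.

def expected_net_benefit(incidents_per_year, loss_per_incident,
                         reduction, annual_cost):
    """Expected annual loss avoided by a control, minus its cost."""
    expected_loss = incidents_per_year * loss_per_incident
    return expected_loss * reduction - annual_cost

options = {
    # option: (incidents/yr, loss/incident, fraction reduced, annual cost)
    "X": (4.0, 50_000, 0.30, 40_000),
    "Y": (4.0, 50_000, 0.50, 90_000),
    "Z": (4.0, 50_000, 0.10, 5_000),
}

ranked = sorted(options, key=lambda k: -expected_net_benefit(*options[k]))
for name in ranked:
    print(f"Option {name}: net benefit = {expected_net_benefit(*options[name]):,.0f}")
```

Even this crude comparison forces the question the checklist and the guru never ask: how much loss does each option actually avert, and at what price?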
Ok, I’m off the soapbox. The main goal of this post is to ask how your company makes “Should we do X, Y, or Z?” decisions. I’ll start the conversation by listing the methods I see used most often. In doing so, I make no judgment on any method’s ability to support good decisions (though it’s clear some have more value than others).

The “Adamant Auditor” method: You’ve been here. The 22-year-old kid shows up three months out of university with his checklist etched in stone. He darn well better be able to check off all those boxes or you’re toast. “But if X does Z and Y does Z, then X=Y… and we’ve done Y,” you argue, only to receive blank stares. Good luck with that. Unless you can build a credible risk-based argument, you might as well just do X like he says.

The “Peer Pressure” method: This is the grown-up equivalent of doing what the cool kids do. “Peers X and Y are doing Z, so we should too” is the justification here. It might be that X and Y have their act together and are great role models. Then again, they might think that alcohol, blindfolds, and a game of high-speed Chicken make for a great Friday night. Remember what your Mama said: “If so-and-so jumped off a cliff, would you?”

The “WIBeHI” method: If you’ve ever used anything that sounds remotely like “Wouldn’t It Be Horrible If X happened, therefore we should do Y” to justify a security initiative, then you’ve used this method. The potential worst-case scenario (and often some extra FUD for good measure) is the main decision criterion in this approach.

The “Guru Guidance” method: Every organization has its guru, and every guru has his opinion. Just ask him. It might be that nobody understands the technical justification behind what he’s recommending, but he knows his stuff, right? Right?

The “Poll the Panel” method: Often called the “Delphi Method,” though I’ve never thought the name very fitting. No journey to a mystical oracle with secret knowledge is required; you simply gather your smart folks and get them to come to a decision. The assumption is that decisions made by many are better than decisions made by one.

The “Pet Project” method: Perhaps it was the advertisement in that magazine on the plane. Maybe that analyst report. Who knows why your boss wants that project so badly, but it’s clear she does. And in this job market, who’s going to argue? If you can get it done while also squeezing in something with actual benefit, there’s a chance you can still put a mark in the Win column.

My tone here is obviously facetious, but I am quite serious in believing that these methods (or some form of them) account for the majority of security decisions made in most organizations. Is this your experience as well? We’ve put up a quick, one-question poll on the topic here and would love to hear from you (we’ll share the results later). If any of these methods resonate, or if you have some to add, please chime in.


Economic crisis could dramatically improve security in 2009

Wednesday, December 3rd, 2008

No – it’s not a typo, and, as far as I know, I haven’t lost my marbles (yet) either. The title is intended to read exactly as it appears. I suppose some explanation is in order…

If you keep abreast of what folks in the security industry are talking about with any regularity then you’ve probably read something lately about how the current economic crisis might affect corporate information security. For instance, layoffs could result in the loss of key security personnel and/or trigger retaliation from bitter employees. Others are worried that slashed budgets won’t allow security programs to buy what they need to buy, or do what they need to do. The list goes on.

(more…)