Monday, September 25, 2006

Rant: arithmetic operations on ordinal numbers

In virtually every discussion of computer|network security and asset protection, people trot out a risk equation along the lines of:

Risk = Threat x Vulnerability x Cost

This seems brain dead to me. Risk is the expected monetary loss from an event. This is a little better:

Risk = (Impact of an Event) * (Probability of an Event)

Let's look at these factors. The Impact can have a dollar value associated with it, which can be more or less successfully generated by looking at replacement cost, revenue loss, etc.
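Here's what that equation looks like when you actually have numbers to put in it — a minimal sketch, with every figure made up for illustration:

```python
# Hypothetical expected-loss calculation for a file server.
# Both numbers below are invented for illustration; the probability
# is the part you almost never actually know.
impact_dollars = 50_000        # replacement cost + revenue loss
probability_per_year = 0.10    # chance of the event in a given year

risk = impact_dollars * probability_per_year
print(f"Expected annual loss: ${risk:,.0f}")  # Expected annual loss: $5,000
```

The math is trivial; the problem, as we'll see, is where that probability comes from.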

The other factor, Probability, will fall into one of two general levels of accuracy. In some cases, you can know the probability of an event is one (that is, certain). You can be certain that an unpatched Windows file server exposed to the internet will be violated, probably within 2 hours (see http://isc.sans.org/survivalhistory.php). In all other cases, you are pulling a number out of the air. Or out of your ass. I've actually read a web publication that claimed to assess earthquake frequency and felt it could do something with that data in a risk equation. I don't buy it. But on to the rant.

Usually, the Risk Equation is done with qualitative factors. For example, at http://www.sans.org/reading_room/whitepapers/auditing/1204.php (section 2.2.4, page 4), the author describes "Qualitative Risk Defined Mathematically":

Relative Risk = Asset Value x Vulnerability x Threat

To the author's credit, there is no actual attempt at doing math. But I have seen (and, at gunpoint, participated in) security assessments where these factors are assigned numeric values. So for example, a file server might get a 4 on a scale of 1-5. A vulnerability guesstimate would be, oh, 3. (But again, that number is pulled out of the air or wherever. YOU DON'T KNOW how vulnerable an OS is. Is there a zero-day attack employed by the bad guys? You either are, or are not, vulnerable. I don't know which is the case. And neither do you. The best you could do is a qualitative ranking based on history, which has no mathematical accuracy when predicting future performance. Such a ranking could be useful in thinking about which platforms are used for which purposes, but it should revolve around the skill level required to successfully compromise the asset. For example: "This is unpatched - Vulnerability = 5. This is patched, but the OS has a monthly patch cycle, so it's almost certain that holes exist which haven't been found by the good guys - Vulnerability = 4. This OS has had one remote root in the default install in 6 years; we'd have to posit an unknown vulnerability in the absence of any history of published exploits - Vulnerability = 1.")
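That history-based rubric could be sketched as a simple lookup — the labels and ranks come straight from the examples above, but the mapping itself is still a judgment call, not a measurement:

```python
# History-based vulnerability rubric from the text. Assigning the ranks
# is judgment, not measurement -- the table just makes the judgment explicit.
VULN_RANK = {
    "unpatched": 5,
    "patched, but monthly patch cycle": 4,
    "one remote root in default install in 6 years": 1,
}

print(VULN_RANK["unpatched"])  # 5
```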

Where these things go sour is when you multiply rankings. (Impact = 5) * (Vulnerability = 5) = (Risk = 25). BZZZT!!!!

Ichiro Suzuki had the most hits in Major League Baseball in 2004. Ranking = 1

He was (I'm making this up) the 500th tallest guy in the League (MLB players tend to be tall). Ranking = 500.

1 * 500 = Nothing. Nothing real can be generated from multiplying two rankings together.

Rankings are ordinal numbers. You can say that 1 is higher|lower than 5. You can't say that it is 5 times better|worse. (In pro sports, being champ, #1, is INFINITELY better than #2.) You can't say it is 4 better|worse. You can't infer any precise degree at all.
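One way to see why this fails: an ordinal scale is only defined up to order-preserving relabeling, and products don't survive relabeling. A toy demonstration in Python (the rankings are made up):

```python
# If multiplying rankings meant anything, the result's ordering would
# survive an order-preserving relabeling of the scale. It doesn't.
asset_a = (2, 5)   # (impact rank, vulnerability rank) -- made-up values
asset_b = (4, 3)

product = lambda pair: pair[0] * pair[1]
print(product(asset_a), product(asset_b))   # 10 12 -> B looks "riskier"

# Relabel the 1-5 scale with the order-preserving map 5 -> 9
# (1<2<3<4<9 preserves every ordering the original scale expressed).
relabel = {1: 1, 2: 2, 3: 3, 4: 4, 5: 9}
remap = lambda pair: (relabel[pair[0]], relabel[pair[1]])
print(product(remap(asset_a)), product(remap(asset_b)))  # 18 12 -> now A does
```

Same ordinal information in, opposite "risk" ordering out. Whatever the product is measuring, it isn't a property of the rankings.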

So: don't multiply ordinal (ranking) numbers. Make a matrix, sure. It probably is useful to rely on your subjective evaluation of where an asset fits (this has an impact or value of "9", that's a "3"). Then make a matrix of impact vs. vulnerability or whatever, and remediate accordingly. But DON'T use bogus math to drive decisions. ("This 4 x 4 = 16 is greater than that 5*3 = 15")
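A minimal sketch of the matrix approach — the thresholds and tier names here are arbitrary, but the point is that each (impact, vulnerability) cell maps to an action without ever doing arithmetic on the rankings:

```python
# Bucket assets by (impact, vulnerability) instead of multiplying the
# rankings. Thresholds and tier names are made up for illustration.
def triage(impact: int, vulnerability: int) -> str:
    """Map two 1-5 rankings to an action tier without doing math on them."""
    if impact >= 4 and vulnerability >= 4:
        return "remediate now"
    if impact >= 4 or vulnerability >= 4:
        return "remediate this quarter"
    return "accept / monitor"

print(triage(5, 5))  # remediate now
print(triage(4, 3))  # remediate this quarter
print(triage(2, 2))  # accept / monitor
```

Note the 4x4 vs. 5x3 trap from the text can't happen here: both land in "remediate now" or "remediate this quarter" by explicit policy, not by a bogus product.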

Now, without making up fairy tales about infinitely skilled attackers and such, you can generate some actual data for security performance metrics. Richard Bejtlich (who, though way smarter than me - and probably you - is guilty of doing math on ordinal stuff in this post: http://taosecurity.blogspot.com/2003/10/dynamic-duo-discuss-digital-risk-ive.html) suggests the way to get real metrics on useful subjects is to do timed pen-testing and the like. Did it take longer for a skilled|unskilled team than last year? In other words, don't measure your team members' shoe sizes - look at the scoreboard! Here's the post: http://taosecurity.blogspot.com/2006/07/control-compliant-vs-field-assessed.html
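A field-assessed metric like that is ordinary interval data, so real arithmetic is legal on it. A sketch, with hypothetical engagement times:

```python
# Median time-to-compromise from timed pen tests, compared year over year.
# All times (in hours) are invented for illustration.
from statistics import median

times_2005 = [3.0, 5.5, 2.0, 8.0]   # hours to first compromise, per engagement
times_2006 = [6.0, 9.5, 7.0, 12.0]

m05, m06 = median(times_2005), median(times_2006)
print(f"median time-to-compromise: {m05}h -> {m06}h")
if m06 > m05:
    print("attackers had to work harder this year")
```

Hours are hours: doubling the median time-to-compromise actually means something, which is exactly what a "4" on a 1-5 scale never does.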

This is hard, and expensive. But if you want useful metrics, it's what you do.
