Risk: Models, Frameworks, Diagrams, & other Unicorn-lair maps

Risk modeling, while it sounds specific, is actually super-contextual. I think my own perspective on the topic (the different types of modeling, what they are good for) was best summed up in a paper/presentation combo I worked on with Alex Hutton for Black Hat & SOURCE Barcelona in 2010. The video from Barcelona is probably the best reference if you want to look that up (yes, lazy blogger is lazy), but let me summarize what I see as the three general purposes of risk models:

  • Design: Aligned most with systems theory. These models try to summarize the inputs (threats, vulns, motives, protections) and the outputs (generally loss, and in some cases “gains”) of a system, based on some understanding of the mechanisms in the system that will allow or impede inputs as catalysts/diffusers of outputs. I would generally lump attack tree modeling and threat modeling into this family; they’re just different perspectives on a system, whether that system is a network architecture or the design of a protocol, software, or network stack. Outside of risk/security, the equivalent is a general “business model,” which attempts to clarify the scope, size, cost, and expected performance of a project.
  • Management: Aligned most with the security/risk metrics movement, and (to some extent) with “GRC”-type work. Management-focused risk models are set up to measure and estimate performance, i.e. to answer questions like “how well are controls mitigating risk?” or “how much risk are we exposed to?”. One could think of the output of the design phase as a view of what types of outcomes to expect, while the management phase provides a view of what outcomes are actually being generated by a system/organization. Outside of risk/security, a good example of a management model is the adoption of annual/quarterly/ongoing quality goals, with regular review of performance against targets.
  • Operations: Operational models are a different beast. And my favorite. Operational models aren’t trying to describe a system; they are embedded in the system, and they influence the activities taking place in it, often in real time. I suppose any set of heuristics could be included in this definition, including ACLs, but I prefer to focus on models that take multiple variables into consideration – not necessarily complex variables – and generate scores or vectors of scores. Why? Because the quality of the resulting decisions (model fit, accuracy, performance, cost/benefit trade-off) will generally be better. Outside of risk/security, a good example is the dynamic traffic routing used in intelligent transport systems. (For a concrete flavor, see the sketch right after this list.)
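To make the Operations flavor concrete, here’s a minimal sketch in Python of the kind of multi-variable scoring model I mean. To be clear, every signal name, weight, and threshold here is invented for illustration – this is not any real system’s logic:

```python
# A sketch of an operational risk model: a handful of cheap signals are
# combined into a vector of scores that can gate an action in real time.
# Signals, weights, and the threshold are all illustrative assumptions.

def score_login(ip_reputation: float, geo_velocity_kmh: float,
                device_known: bool, failed_attempts: int) -> dict:
    """Return a vector of sub-scores plus a combined risk score in [0, 1]."""
    scores = {
        "ip": ip_reputation,                              # 0 = clean, 1 = known-bad
        "velocity": min(geo_velocity_kmh / 1000.0, 1.0),  # "impossible travel" signal
        "device": 0.0 if device_known else 0.6,           # unseen device is suspicious
        "retries": min(failed_attempts / 5.0, 1.0),       # brute-force signal
    }
    weights = {"ip": 0.4, "velocity": 0.3, "device": 0.2, "retries": 0.1}
    scores["combined"] = sum(scores[k] * weights[k] for k in weights)
    return scores

# Embedded in the login path, influencing the system as it runs:
decision = score_login(ip_reputation=0.3, geo_velocity_kmh=80,
                       device_known=False, failed_attempts=1)
action = "challenge" if decision["combined"] > 0.35 else "allow"
```

The point isn’t the particular weights – it’s that the model lives inside the flow and emits a score vector that drives a decision, rather than describing the system from outside.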

“Framework” is another term that I’ve heard used in a number of different ways, but it seems to really be an explanation of a selected approach to modeling, plus some bits on process – how models and processes will be applied in an ongoing way to administer the system. Even Wikipedia shies away from an overarching definition; the closest we get is “conceptual framework,” described as an outline of possible courses of action, or a way to present a preferred approach to an idea or thought. They suggest we also look at the definition of scaffolding: “a structure used as a guide to build something” (yes, thank you, I want us to start discussing risk scaffolding when we review architecture, pls).

Anyway, I’m glad to explain a bit about risk models, but it is all a preface. The purpose of explaining this is to describe something that’s happened to me a few times when I’ve tried to model out a system using a Design/System approach – let’s call it clumping. To the right you will see an attack tree diagram developed for e-commerce fraud, summarizing one long branch of the tree.

If one were attempting to sketch out an entire attack tree diagram for e-commerce fraud, the first thing to notice is that the tree is taller than one would like. What I mean is that, as a defender, I’d prefer the tree to be rooted closer to my platform surface, because that’s a surface I can instrument and control. Unfortunately, if you look at this tree, it’s “rooted” way outside any controllable platform space – back in end-user (or worse, friends-of-end-user) client-side systems, email inboxes, and (social) brains. Not only are those systems disconnected from our platform surface, they’re effectively invisible to us.

The other thing we notice about the tree is that it gets too branchy (wide, or brushy) from the start and all the way down. Why? Repetition. To draw the true tree, we need to consider all the attack vector branches and string them together in the correct chains from inception to completion – but attackers are by nature opportunistic, while our models are by nature systematic. Meaning? An attacker can try to phish for credentials by asking for them from an obfuscated e-mail address, or phish using a phishing site, or capture credentials more directly via malware already resident on the machine, or phish users into downloading malware that is then used to capture credentials, and… or… whatever – the point is the attacker is setting up traps to get access to legitimate credentials. If one vector doesn’t work, they try another, and they can layer and redirect eleventy-billion times. Do we really have time to draw out a decision-tree-style attack tree that captures each version of this part of the attack in order to defend against it? No – the attack tree would get out of control. So instead of each variant kicking off its own branch, we can still list the variants, but clump them together as one phase, one section of the resulting attack tree. I’ve simplified our tree from huge repetitive combinations of long skinny branches into a stubbier but more manageable chain of clumps: Tech Exploits } Credential/$ Fraud } Monetization.
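If it helps to see clumping as a data structure, here’s a toy sketch in Python of the same move – the variant labels are just examples from above, not a complete enumeration:

```python
# The "true" tree: every credential-capture variant spawns its own branch,
# and each branch repeats the same downstream fraud/monetization steps.
true_tree = {
    "phish via obfuscated email": {"credential fraud": {"monetization": {}}},
    "phish via fake site":        {"credential fraud": {"monetization": {}}},
    "malware keylogger":          {"credential fraud": {"monetization": {}}},
    "phish, then malware dropper": {"credential fraud": {"monetization": {}}},
}

# The clumped tree: the variants are still listed, but as members of one
# phase, so the tree collapses into a short chain of clumps.
clumped_tree = [
    {"phase": "Tech Exploits",
     "variants": ["obfuscated email", "fake site", "keylogger", "dropper"]},
    {"phase": "Credential/$ Fraud",
     "variants": ["account takeover", "card-not-present"]},
    {"phase": "Monetization",
     "variants": ["reshipping", "resale", "cash-out"]},
]
```

Same information, but the repetition lives inside each clump instead of multiplying branches.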

Jack Whitsitt is working on the NIST Cyber Security Framework project and recently asked for some feedback on this happenin’ jazz medley of rainbow fruit flavors, which he’s since edited up quite a bit, resulting in this. My feedback is: as rainbow fruit-a-licious as the first version is, I like it better. I get totally lost in the MVC model – and models/frameworks should help us orient, figure out how a system works, or at least what we can see going into and coming out of a system.

Here’s what I take away from v1 (aka Risk Reasoning Rainbows, aka R3): To me this is a non-standard view of an attack-defend model, with each node in the decision tree blown out into the different optionality available to both the defender and the attacker. The atomic unit of the decision tree node is an “event,” and each of the options could be used to describe the edge configuration from the perspective of the attacker or the defender. This is a non-standard view because in most threat models I’ve seen, the nodes and edges are layered into a “flow” endemic to the system – meaning there is some order to an attack and a corresponding defense, i.e. AttackA -> DefenseA -> AttackB -> DefenseB – whereas Jack’s R3 sets up a system-versus-system dynamic a la Spy vs Spy, i.e. {the set of all Attacks, including tools, process, vulns, motivations} versus {the set of all IMPACTS, assuming defenses or the lack thereof}. Defense is left for designers: to set up control objectives, prioritize targets/impacts, and clarify risk strategy (prevent, accept, respond, recover).

The scenarios don’t flow, but once I understood the clumping, I think that if the optionality is pulled out of a grid it will be easier for the reader to realize that the only part of the model meant to be read left-to-right is the headers. Also, it would be useful to have a node/edge example that incorporates both potential risk (attack/impact) and potential mgmt strategy (defend, accept, etc.) – I sketch one below. On the attack side, this is a good MECE summary, and it was probably a relief to finally get it on one piece of paper. It is still challenging to understand how the framework/model is actionable for a defender. The attack surface (including the set of potential impacts) still has those tall and repetitively wide attack trees aimed at it – so how should we map out the unicorn lair (i.e. is there a “there” in there)?
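Since I’m the one asking for it, here’s roughly what I have in mind for that node/edge example, sketched in Python – to be clear, the field names and values are my own inventions, not R3’s actual vocabulary:

```python
# A hypothetical "event" node carrying both sides of the optionality:
# attacker options and impacts on one side, the defender's strategy and
# controls on the other. All fields and values are illustrative.

from dataclasses import dataclass, field

@dataclass
class EventNode:
    name: str
    attacker_options: list   # tools/process/vulns available at this step
    impacts: list            # what the attacker gains if the step succeeds
    strategy: str            # prevent | accept | respond | recover
    controls: list = field(default_factory=list)  # where the strategy gets built

credential_capture = EventNode(
    name="credential capture",
    attacker_options=["phishing site", "obfuscated email", "keylogger"],
    impacts=["account takeover", "card fraud"],
    strategy="prevent",
    controls=["MFA", "phishing-resistant auth", "mail filtering"],
)

# An edge is then just one attacker option paired against the defender's
# chosen strategy at that node:
edges = [(opt, credential_capture.strategy)
         for opt in credential_capture.attacker_options]
```

One structure per event node, with both the risk and the management strategy visible on the same piece of paper.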

The key, I think, will be that the optionality we catalogue in the design phase (both inputs and outputs of the system being designed) needs to be translated better into decision points (where controls will be built/operationalized). Not just the thresholds/appetites that the management model needs, but the TRANSDUCERS within the system that convert inputs into outputs. That’s what we need to build. Not an assessment scaffolding, but the actual build work to be done on the system. That’s the framework. Connect those dots and we will be on the trail to the unicorns.
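To make “transducer” less hand-wavy, a last minimal sketch (Python again, with all names and numbers invented): the management model hands down the threshold, the design model catalogues the inputs and outputs, and the transducer is the piece that actually gets built into the flow:

```python
# A decision point embedded in the system: input event in, output action
# out. The threshold is the management model's contribution (appetite);
# the scoring stand-in represents whatever the design model specified.

CHALLENGE_THRESHOLD = 0.35   # set by the management model, not hard-coded lore

def risk_of(event: dict) -> float:
    """Stand-in for whatever operational score the design model calls for."""
    return (0.4 * event.get("ip_reputation", 0.0)
            + 0.6 * (0.0 if event.get("device_known") else 0.5))

def checkout_transducer(event: dict) -> str:
    """The build work: converts an input into an output, in-line."""
    if risk_of(event) > CHALLENGE_THRESHOLD:
        return "step-up-auth"    # respond
    return "approve"             # accept

# e.g. checkout_transducer({"ip_reputation": 0.2, "device_known": False})
# -> 0.08 + 0.30 = 0.38 > 0.35 -> "step-up-auth"
```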

What do you think?
