Forget C-I-A, Availability Is King

In the traditional parlance of infosec, we've been taught repeatedly that the C-I-A triad (confidentiality, integrity, availability) must be balanced in accordance with the needs of the business. This concept is foundational to all of infosec, ensconced in standards and certification exams and policies. Yet, today, it's essentially wrong, and moreover isn't a helpful starting point for a security discussion.

The simple fact is this: availability is king, while confidentiality and integrity are secondary considerations whose requirements rarely have a sensible default and instead have to be worked out per dataset and per business. We've reached this point thanks in large part to the cloud and the advent of utility computing. That is, we now assume uptime and availability will always be optimal, and thus we don't need to think about them much, if at all. And when we do think about them, they fall under the domain of site reliability engineering (SRE) rather than being a security function. And that's a good thing!

If you remove availability from the C-I-A triad, you're then left with confidentiality and integrity, which can be boiled down to two main questions:
1) What are the data protection requirements for each dataset?
2) What are the anti-corruption requirements for each dataset and environment?

In the first case you quickly go down the data governance path (inclusive of data security), which must factor in requirements for control, retention, protection (including encryption), and masking/redaction, to name a few things. Looking at the big picture, data protection then becomes an inforisk question, which in turn makes it much easier to drill down into a quantitative risk analysis and evaluate the overall exposure to the business.
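To make the quantitative side concrete, here's a minimal sketch of the classic single-loss / annualized-loss calculation applied to one dataset. The record count, per-record cost, and annual likelihood below are purely hypothetical placeholders, not figures from any real assessment:

```python
# Minimal sketch: quantitative exposure estimate for a single dataset.
# Every input below is a hypothetical assumption used only for illustration.

records_in_dataset = 250_000        # assumed number of records subject to protection
cost_per_record = 150.00            # assumed per-record cost of a disclosure (USD)
annual_rate_of_occurrence = 0.05    # assumed likelihood of a disclosure in a given year

# Single Loss Expectancy (SLE): expected cost of one disclosure event
sle = records_in_dataset * cost_per_record

# Annualized Loss Expectancy (ALE): expected yearly exposure for this dataset
ale = sle * annual_rate_of_occurrence

print(f"SLE: ${sle:,.2f}")   # SLE: $37,500,000.00
print(f"ALE: ${ale:,.2f}")   # ALE: $1,875,000.00
```

Run per dataset, numbers like these give a common denominator (dollars per year) for comparing exposure across the portfolio.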

As for anti-corruption (integrity) requirements, this is where traditional security practices enter the picture, such as ensuring systems are reasonably hardened against compromise and performing appsec testing to protect the application. It also dovetails back into data governance considerations to determine the potential impact of data corruption on the business, whether that's fraudulent orders or transactions; tampering with data, such as a student changing grades or an employee changing pay rates; or data corruption in the form of injection attacks.
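As one narrow illustration of the injection case, the sketch below shows a hypothetical grade-update routine that uses parameterized queries, the sort of integrity control appsec testing looks for. The schema and values are invented for the example:

```python
import sqlite3

# Hypothetical grade table and update routine, invented for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grades (student_id TEXT, course TEXT, grade TEXT)")
conn.execute("INSERT INTO grades VALUES ('s-100', 'CS101', 'B')")
conn.execute("INSERT INTO grades VALUES ('s-200', 'CS101', 'C')")

def update_grade(student_id: str, course: str, grade: str) -> None:
    # Placeholders bind untrusted input as data, never as SQL syntax,
    # so a crafted value can't rewrite rows beyond the intended one.
    conn.execute(
        "UPDATE grades SET grade = ? WHERE student_id = ? AND course = ?",
        (grade, student_id, course),
    )

# A malicious grade value stays a literal string instead of altering the query.
update_grade("s-100", "CS101", "A' WHERE '1'='1")
print(conn.execute("SELECT * FROM grades ORDER BY student_id").fetchall())
# [('s-100', 'CS101', "A' WHERE '1'='1"), ('s-200', 'CS101', 'C')]
```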

What's particularly interesting about integrity is applying it to cloud-based systems and viewing it through a cost control lens. Consider, if you will, a cloud resource being compromised in order to run cryptocurrency mining. That's a violation of system integrity, which in turn may translate into sizable opex burn due to unexpected resource utilization. This example once again highlights how you can view things through a quantitative risk assessment lens.
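A back-of-the-envelope estimate makes the point; the instance count, hourly price, and dwell time below are invented assumptions rather than real billing data:

```python
# Rough sketch of the opex burn from a hypothetical cryptomining compromise.
# All inputs are assumptions chosen purely for illustration.

compromised_instances = 20     # assumed instances hijacked or spun up by the attacker
hourly_rate_usd = 3.06         # assumed on-demand price per instance-hour
dwell_time_hours = 24 * 14     # assumed two weeks before detection and cleanup

unexpected_spend = compromised_instances * hourly_rate_usd * dwell_time_hours
print(f"Unexpected opex: ${unexpected_spend:,.2f}")  # Unexpected opex: $20,563.20
```

Multiply that figure by a likelihood of occurrence and it slots into the same annualized-exposure math as the confidentiality example above.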

At the end of the day, C-I-A are still useful concepts, but we're beyond the point of thinking about them in balance. In a utility compute model, availability is assumed to approach 100%, which means it can largely be left to operations teams to own and manage. Even considerations like DDoS mitigation frequently fall to ops teams these days rather than to security. Making this shift allows one to talk about inforisk assessment and management within each remaining vertical (confidentiality and integrity), which makes it much easier to apply quantitative risk analysis and, in turn, to articulate business exposure to executives and more clearly manage the risk portfolio.

(PS: Yes, I realize business continuity is often lumped under infosec, but I would challenge people to think about this differently. In many cases, business continuity is a standalone entity that blends together a number of different areas. The overarching point here is that the traditional status quo is a failed model. We must start doing things differently, which means flipping things around to identify better approaches. SRE is a perfect example of what happens when you move to a utility computing model and then apply systems and software engineering principles. We should be looking at other ways to change our perspective rather than continuing to do the same old broken things.)
