Comments

That Guy December 18, 2018 7:58 AM

I like the ideas for the Organizational, Government, and International layers.

I would argue for keeping the traditional Layer 8 of “User” in addition to the new ones. No matter how good the organization and its policies are, one user can subvert all of it. That’s why spearphishing works.

Clive Robinson December 18, 2018 9:19 AM

@ Bruce,

he makes real the old joke about adding levels to the OSI networking stack

They have been added to, albeit informally, at both ends for more years than I care to remember, which is why it’s often simply called the “computing stack” rather than the palindromic “ISO OSI” stack.

Like others, I tend to see 8=User and 9=Management as solid fixtures, with standards, legislation, government, treaty, and IGO filling the layers above.

I’ve mentioned it before; see my “footnote [1]” (https://www.schneier.com/blog/archives/2016/09/brian_krebs_ddo.html#c6735120), and I’ve even “joked about the joke” (https://www.schneier.com/blog/archives/2016/03/possible_govern.html#c6719461).

That said, the more interesting stuff, as far as quite a few are concerned currently, is “at the other end”, on the “0.x” or “minus” layers, where the notable batch of “new” hardware-based attacks, the “gift that keeps on giving”, is located.

The thing is that these “informal” stacks tend to vary, and I’m far from sure how you would bring them all together as a single stack.

So Peter Swire’s proposal was probably doomed way before he thought of it…

Phaete December 18, 2018 9:23 AM

I expect users to adhere to a strict cybersecurity policy about as much as they adhere to “Look both ways before crossing” and “Don’t cross on a red light”.

Enforce what can be enforced on a digital level, but assume the human level is going to err and build fencing around that to detect/minimise impact.

The teaching methods can make a small difference, but the underlying cause is the human condition.

wumpus December 18, 2018 10:02 AM

@phaete: “The teaching methods can make a small difference, but the underlying cause is the human condition.”

I think our best hope is that the red flags of “this company is controlled by idiots” become better known. Blindly following “expire your passwords” rules (and similar password idiocy, including “insecurity questions”) in this day and age deserves both loss of business and ridicule.

Tatütata December 18, 2018 10:48 AM

What about zero or negative layers on the bottom end of the stack?

+1 : physical
0 : mathematical
-1 : theological (or whatever explanation you have for the universe)

Denton Scratch December 18, 2018 11:39 AM

@Clive
“Peter Swire’s proposal was probably doomed”

Well, he started from the OSI stack, which was always an arbitrary description of a network infrastructure, and which in practice was made obsolete by the internet protocols within about five years.

How come people still mention OSI? Are these the same people that think CORBA is still relevant?

Clive Robinson December 18, 2018 4:45 PM

@ wumpus, phaete,

I think our best hope is that the red flags of “this company is controlled by idiots” become better known.

Whilst that and “the human condition” are certainly part of the ICT security problem, they are also part of most other endeavours in life, many of which, despite the apparent odds, succeed.

Thus the question is: why are they effectively negated in other endeavours, including physical security, but not in ICT security?

As I’ve mentioned before, ICT security tries to hold itself up as a science, or at least a discipline founded on logic and maths, thus proofs etc. The problem is the big grey pachyderm with the big nose and ears trying to be invisible in the room, even though everyone has to tiptoe around it as it fills almost the entire space: the lack of usable measurands, or indeed any measurands at all…

If you cannot perform a comparative measure of significance over time, which you mostly cannot in ICT security, then you cannot really show what is working and what is failing. As the latter is, as a probability, far more likely, the chances are that whatever you are doing is not working but failing.

Whatever a supposed adjudicator (reviewer/journalist) sees is in all probability a success in an adherent’s (developer’s/salesperson’s) mind and more or less irrelevant to all others… Arguments that are effectively “I’m right because I say I’m right” would not survive measurands of merit across a whole system; but such arguments work well in ICT security currently, where there are no measurands.

And, as is the way with life, the more money or publicity there is to be made, the more strident that argument tends to get, especially when it turns into market share.

Worse perhaps is the “early take-up bonus”. There are failings in the simple economic models of markets to do with “distance costs”. That is, under the physical and traditional telecoms models “you pay by the mile”: your market price rises with distance, which puts a boundary around your place of manufacture, inside which someone still paying start-up costs can be cheaper because they are that much closer to a customer (it’s also a reason for import tariffs). The Internet currently does not have these costs, and even if it did, the near-zero cost of localised data duplication would render them ineffective.

Thus being first to market gives you a number of “global” advantages not seen in more traditional markets. Firstly, you are “the only game in the world”, which means those who need or want such a product have only you to go to. For a short while you have the market, albeit initially small, entirely to yourself, which gives you more time to grow with the market than potential competitors; that not only reduces the comparative impact of start-up costs but, provided you can keep the impetus going, gives you a near-permanent market lead. Secondly, as the market grows you are the market leader, which makes it more likely that a new customer will come to you, all other things being equal, because they have more people to use the product with. Thirdly, due to the way some information products work, you get initial scalability advantages due to natural-law constraints[1].

There are basically two types of information products these days: those that are local to the user and those that are local to the product provider. The latter we tend to lump under the title “cloud”.

From a user-privacy perspective, the more local users keep their data, the easier it is to secure. From a service provider’s perspective the opposite would apply, if they actually had an interest in protecting the remote user’s privacy; the number of press articles about data breaches suggests to many that service providers consider “lip service” sufficient…

When you consider service providers, a new entrant in an existing market, all other things being equal, has to find a way to do things at a lower cost per user so that they can scale up faster. There are two ways to do this: a technology advantage or a revenue advantage.

In most cases a new entrant will not have a technological advantage, as market leaders in any given domain also tend to have advantages when it comes to accessing new technology.

Which leaves “revenue advantage”. There are two ways: firstly, increase the profit per customer; secondly, reduce the cost per customer. In both cases security is going to suffer. That is, either the data held on a customer is considerably larger, and thus a more promising target for attackers, or the “sunk cost” of security measures is cut back beyond the point where it is effective, and thus they become an easier target to attack.

The problem with security is that it is seen as either an expensive defence expenditure, where the only way you can tell you are not spending enough is that you are breached, or, worse, a non-profit item that is an expensive waste of resources inhibiting lower-cost, and thus more profitable, working procedures and practices. But worst of all, in startups it’s generally not even a consideration, as it delays “getting to market”, and thus builds a tsunami of technical debt that will in all probability not be paid off before the venture goes bust or is bought up by another organisation.

[1] The basic natural limit to the speed of processing at the user level is not the “heat death” and similar limits you get told about, but distance, due to the speed of light. When you perform a search, the distance over which the user request is sent and the answer returned is generally unimportant; what is critical is the distance between the searching CPU and the data to be searched on storage devices. Thus making that function as dense as possible is where the cost savings on user transactions can be made. This remains true for given user types; however, users in one region are not entirely like those in a different region when it comes to searching, especially on a global basis. Thus, as growth rises, there comes a point where splitting databases up by language or regional information becomes worthwhile, to manage network issues and improve reliability and thus availability.
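
To put rough numbers on that speed-of-light limit, here is a minimal back-of-the-envelope sketch; the distances and the two-thirds-of-c propagation factor for fibre are illustrative assumptions, not figures from the comment above:

    #include <stdio.h>

    int main(void) {
        const double c_km_s = 299792.458;  /* speed of light in vacuum, km/s */
        const double fibre  = 2.0 / 3.0;   /* signals in fibre travel at roughly 2/3 c */
        const double v_km_s = c_km_s * fibre;

        /* Illustrative one-way distances between the searching CPU and storage. */
        const double km[] = { 0.001, 1.0, 100.0, 5000.0 };

        for (size_t i = 0; i < sizeof km / sizeof km[0]; i++) {
            double rtt_ms = 2.0 * km[i] / v_km_s * 1000.0;  /* round trip, ms */
            printf("%8.3f km -> minimum round trip %10.6f ms\n", km[i], rtt_ms);
        }
        return 0;
    }

Even with everything else free, 5,000 km between CPU and data costs roughly 50 ms per round trip, which is why packing CPU and storage densely, and eventually splitting data up regionally, pays.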

Clive Robinson December 18, 2018 5:19 PM

@ Denton Scratch,

Well, he started from the OSI stack, which was always an arbitrary description

All “stacks” or “models” are arbitrary descriptions, usually based on the original designer’s focus.

In that regard they are just like APIs or other “levels” in segregating software or hardware designs into manageable component parts.

As arbitrary, in fact, as the very sound safety-critical software development rule that “no subroutine should be more than one and a half screens or printout pages in length”. In short, chop off the setup and clear-down, and the real subroutine logic fits in a single view the developer can see all in one go, so twenty to fifty lines at most. Which is also why you get the “no comments in the code” and similar style arguments. These have a side effect of encouraging “programming tricks”, where programmers try to get “more functionality per line”; one such is using the short-circuit AND/OR rules to do selective execution[1] (see the sketch after the footnotes). The “Obfuscated C Code Contest” is for fun and amusement, not an ideal to work towards in code that is used every day to raise revenue. So if you want to get more done per line of code, use a higher-level High Level Language (HLL).

[1] As programmers who work on both the ISA and HLL sides of the great divide know, most HLL tricks actually generate less efficient execution code at the ISA level[2], so the HLL programmers who use them are generally deluding themselves, and often opening up what will become attack vectors of various types.

[2] Compilers and interpreters can only be “so clever”, and HLL code tricks just reduce the chances of optimization, or work with only one compiler or interpreter, and are thus neither portable nor reusable at a later date.
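
A minimal sketch of the short-circuit “selective execution” trick described above, shown next to its plain equivalent:

    #include <stdio.h>

    int main(void) {
        int verbose = 1;

        /* The trick: abuse short-circuit evaluation of && so that the
         * printf() runs only when verbose is non-zero. Valid C, but the
         * control flow is hidden inside an expression. */
        verbose && printf("log: trick form\n");

        /* The plain form: identical behaviour, obvious control flow. */
        if (verbose)
            printf("log: plain form\n");

        return 0;
    }

Any modern compiler will typically emit the same machine code for both forms, which is footnote [2]’s point: the trick buys nothing except reduced readability, and readability is where review-stage checks live.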

gordo December 18, 2018 5:43 PM

@ Clive Robinson,

If you cannot perform a comparative measure of significance over time, which you mostly cannot in ICT security, then you cannot really show what is working and what is failing. As the latter is, as a probability, far more likely, the chances are that whatever you are doing is not working but failing.

That being so, this quote is apt:

The measure of success is not whether you have a tough problem to deal with, but whether it is the same problem you had last year.

John Foster Dulles [President Eisenhower’s Secretary of State during the formative years of the military-industrial complex]

Though it tells me enough by itself, the above quote from Dulles is the epigraph for this article:

Margin of Safety or Speculation?
Measuring Security Book Value
Dan Geer and Gunnar Peterson
February 2014

https://www.usenix.org/system/files/login/articles/12_geer.pdf

Phaete December 18, 2018 7:28 PM

@ Clive Robinson,

I largely agree with what you say; I just have to add two things.

Thus the question is: why are they effectively negated in other endeavours, including physical security, but not in ICT security?

Regulation (good or bad) also plays a big role here.

The problem with security is that it is seen as either an expensive defence expenditure, where the only way you can tell you are not spending enough is that you are breached, or, worse, a non-profit item that is …

In my experience some businesses use risk analysis to weigh security expenditure against the loss from a breach, and have calculated that, for them, a breach costs less than the last 20% of ICT security spending, giving them more profit and everyone higher up bigger bonuses.
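
A toy version of that calculation, in the spirit of annualised loss expectancy (ALE = incidents per year × cost per incident); every figure below is made up purely for illustration:

    #include <stdio.h>

    int main(void) {
        /* Made-up inputs, purely illustrative. */
        double breaches_per_year = 0.05;       /* 1-in-20 chance of a breach per year */
        double cost_per_breach   = 4000000.0;  /* direct + indirect cost of one breach */
        double last_20pct_spend  = 250000.0;   /* annual cost of the final 20% of
                                                  the ICT security budget */

        /* Annualised loss expectancy that the last 20% of spend would
         * (generously) prevent entirely. */
        double ale = breaches_per_year * cost_per_breach;

        printf("Expected annual breach loss : $%9.0f\n", ale);
        printf("Last 20%% of security spend : $%9.0f\n", last_20pct_spend);
        printf("Paper 'saving' from cutting : $%9.0f/yr\n", last_20pct_spend - ale);
        return 0;
    }

On those numbers the cut looks rational, which is exactly the point; the weakness is that the probability and cost inputs are guesses, which ties back to Clive’s complaint about the lack of measurands.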

Clive Robinson December 19, 2018 3:31 AM

@ gordo,

Gunnar Peterson

I used to read and comment on his blog 1raindrop quite some years ago now. He made a point of saying how most business security had not changed in around twenty years, with firewalls at the perimeter and counting rejected connection attempts as a success measure…

I also read and posted to the Mandiant founder’s blog. I don’t know what rank he held in the air force, but he had a habit of following the “US existential threat” line a little too closely for my liking. He and Gunnar got into it over the economic policy China was then following, of reinvesting a large amount of what it had earned from America back in America; China thus owned a considerable amount of US assets, and so had a degree of leverage on the US economy.

It was around the time of Banking Crisis One, and I agreed more with Gunnar’s viewpoint as it had more valid economic sense to it. I stopped visiting the Mandiant blog around then, when it got more than a little too shrill for my liking. Gunnar’s blog started to change direction as well, into an area that was not of much interest to me, so… To be honest it’s been at least a couple of years now since I saw his name come up anywhere…

Clive Robinson December 19, 2018 4:12 AM

@ Phaete,

… some businesses use risk analysis to weigh security expenditure against the loss from a breach, and have calculated that, for them, a breach costs less than the last 20% of ICT security spending…

It’s part of the “might never happen” or “not gonna happen on my watch” thinking that is becoming ever more prevalent in industry in general, and in privatised utilities especially.

Essentially security, like infrastructure maintenance, is a sunk cost which has the “defence spending” problem attached. Actuarially, your house will have a fire once every X years, based on the normalisation of thousands of otherwise apparently random events. Should you get fire insurance?…

Well, if you are planning to live out your life there, or it has a significant value to you personally, then the answer is usually yes.

But if you are renting for only six months and your stuff is just “student stuff” then the answer may well be no.

And if you are a squatter then almost certainly no.

Officers of companies these days in some cases appear to fall between squatters and students in their attitudes to insurance, and thus tend to get only the minimum that the law requires of them…

As there is no law that requires them to spend on security, they think “why waste the money?”. They have a hit-and-run attitude: their current job is just an 18-month stepping stone, so their real risk is not fire or security but failing to make the next quarter’s earnings look good… because they have no attachment to the business in any meaningful way.

As long as nothing happens on their watch they are out of there, so they don’t even care about shareholders etc…

I guess it’s another reason to talk about why regulation of the right sort would be beneficial not just to individual organisations but to society in general.

Denton Scratch December 19, 2018 9:53 AM

@Clive

“No subroutine should be more than one and a half screens or printout pages in length”

Goodness gracious, you must be as old as me, possibly even older! I once had a telephone-support job that involved resorting to the source code from time to time; it was provided in the form of green-and-white-striped printouts, which were stored in a rack behind me.

Subroutine length: your rule is too lenient. It should be possible to completely grok a subroutine (or ‘function’, or ‘method’, or whatever new-fangled term is currently considered hip) without scrolling or paging. That means no more than one screen; nowadays that’s about 80 lines; but back in the days of ‘green screens’, it was 40 lines.

I think 40 lines is a maximum. It’s not just that you should be able to see the whole thing at once; it’s also that the block of code needs to be susceptible to complete understanding with little effort. Fancy tricks like those used in obfuscated C (or really, almost any kind of real-world C) should be rejected at the review stage. Modern compilers do the work for you – it’s fine nowadays to write code that is easy to understand.

This is one of the things I hate about Java: you can’t fit a function (or method, or whatever) into 40 lines, because the bloody boilerplate doesn’t fit in 40 lines – forget about the code itself. Hell, even the function signature sometimes runs to ten or more lines.

Tatütata December 19, 2018 12:35 PM

That means no more than one screen; nowadays that’s about 80 lines; but back in the days of ‘green screens’, it was 40 lines.

The common denominator was more like 24×80, e.g. the VT52, VT100, or IBM 3270 (addressable area), as well as the non-baseline IBM PC and Apple II with the appropriate hardware (graphics card + monitor).

CHRIS REID December 19, 2018 3:18 PM

Aren’t we talking about the “Consultancy Layers?”

I think so. But they are already there, or should be, as part and parcel of the design of systems. There is an industry.

Folding the people, the planning, the policy into the tools themselves only illustrates the mistake companies make when the tools are all that is left after installation, left to go stale in the face of evolving threats.

Yes, it reveals human weaknesses in the threat model at the macro/design levels of basic premises about the nature of the insecurity problem. But canonicalizing it as something that must be done or you are not compliant, or making security extremely cost-prohibitive by making installing a network like designing a museum, with all the plans and consultants and lawyers and so on, will lead to more cut-rate security installations, not fewer.

The field itself makes the models; why remake the field according to a theoretical model that engulfs it? Best practices come from experience, not from hierarchical positions of “whoever is at the highest layer makes the rules and dictates the best practices.”

Please do not put the politicians, Microsoft, Oracle, Google, or even Elon Musk at the top of the OSI stack.

gordo December 19, 2018 6:34 PM

@ Clive Robinson,

I used to read OneRaindrop, but stopped after a while, as well. A quick look shows the blog as static since 2015. Regarding the threat-hunting industry, i.e., FireEye/Mandiant, and scores of other firms, I see them as precursors to AI-driven infosec. Given the offensive nature of infosec, I think it’s correct, as some say, that OODA loops will be taken over by machine processes. It’s being discussed in terms of “hyperwar.” [1][2]

Getting back to the Dulles quote, metrics and spend, etc., the so-called “missile gap” comes to mind… I would not be at all surprised to see similar claims of “AI gaps”.


[1] https://www.fifthdomain.com/dod/2017/08/07/emerging-hyperwar-signals-ai-fueled-machine-waged-future-of-conflict/
[2] http://www.tomdispatch.com/post/176509/tomgram%3A_michael_klare%2C_the_coming_of_hyperwar/#more

Matthew Merchant December 21, 2018 10:28 AM

NIST has similar guidance for federal organizations outlined in various NIST Special Publications. The three levels of organization-wide risk management outlined in NIST SP 800-39 are shown as a pyramid (which itself sits under yet another tier of federal laws). The cap represents an executive agency (e.g., DoD), the middle tier a component (e.g., the US Navy), and the base an information system (e.g., an aircraft navigation system). Using such a model it is easier to see how a higher authority’s cybersecurity policies filter down into specific systems’ policies.

Facto December 21, 2018 11:29 PM

“To be honest it’s been atleast a couple of years now since I saw his name come up anywhere…”

In reality, nobody reads anything you write anywhere except here, ad nauseam.
