Artificial Intelligence and the Attack/Defense Balance

Artificial intelligence technologies have the potential to upend the longstanding advantage that attack has over defense on the Internet. This has to do with the relative strengths and weaknesses of people and computers, how those all interplay in Internet security, and where AI technologies might change things.

You can divide Internet security tasks into two sets: what humans do well and what computers do well. Traditionally, computers excel at speed, scale, and scope. They can launch attacks in milliseconds and infect millions of computers. They can scan computer code to look for particular kinds of vulnerabilities, and data packets to identify particular kinds of attacks.

Humans, conversely, excel at thinking and reasoning. They can look at the data and distinguish a real attack from a false alarm, understand the attack as it’s happening, and respond to it. They can find new sorts of vulnerabilities in systems. Humans are creative and adaptive, and can understand context.

Computers—so far, at least—are bad at what humans do well. They’re not creative or adaptive. They don’t understand context. They can behave irrationally because of those things.

Humans are slow, and get bored at repetitive tasks. They’re terrible at big data analysis. They use cognitive shortcuts, and can only keep a few data points in their head at a time. They can also behave irrationally because of those things.

AI will allow computers to take over Internet security tasks from humans, and then do them faster and at scale. Here are possible AI capabilities:

  • Discovering new vulnerabilities (and, more importantly, new types of vulnerabilities) in systems, both by the offense to exploit and by the defense to patch, and then automatically exploiting or patching them.
  • Reacting and adapting to an adversary’s actions, again both on the offense and defense sides. This includes reasoning about those actions and what they mean in the context of the attack and the environment.
  • Abstracting lessons from individual incidents, generalizing them across systems and networks, and applying those lessons to increase attack and defense effectiveness elsewhere.
  • Identifying strategic and tactical trends from large datasets and using those trends to adapt attack and defense tactics.

That’s an incomplete list. I don’t think anyone can predict what AI technologies will be capable of. But it’s not unreasonable to look at what humans do today and imagine a future where AIs are doing the same things, only at computer speeds, scale, and scope.

Both attack and defense will benefit from AI technologies, but I believe that AI has the capability to tip the scales more toward defense. There will be better offensive and defensive AI techniques. But here’s the thing: defense is currently in a worse position than offense precisely because of the human components. Present-day attacks pit the relative advantages of computers and humans against the relative weaknesses of computers and humans. Computers moving into what are traditionally human areas will rebalance that equation.

Roy Amara famously said that we overestimate the short-term effects of new technologies, but underestimate their long-term effects. AI is notoriously hard to predict, so many of the details I speculate about are likely to be wrong, and AI is likely to introduce new asymmetries that we can’t foresee. But AI is the most promising technology I’ve seen for bringing defense up to par with offense. For Internet security, that will change everything.

This essay previously appeared in the March/April 2018 issue of IEEE Security & Privacy.

Posted on March 15, 2018 at 6:16 AM

Comments

David Rudling March 15, 2018 6:34 AM

If discovering new vulnerabilities by comprehensive probing is a potential strength of AI, then the attack will gain the advantage unless software developers utilise AI testing for security vulnerabilities on a scale, and in a creative way, that I am afraid does not seem to be the norm at present. If AI security testing is funded properly as part of the development cycle then yes, the advantage could switch to the defense. But I don’t hold out too much hope that the determination of the defense will surpass that of the attacker – that word “funding” and its impact on software life-cycle cost is the real killer.

me March 15, 2018 7:31 AM

@Schneier
“longstanding advantage that attack has over defense on the Internet”

I think that this is true for two reasons:
-An attacker needs only one "hole" to win, while the defender has to defend everything.
-No one in the infosec industry is investing effort in defense.

Everyone is focused on pentesting and so on.
I think this is wrong and doesn't work.
OK, to have a good defense you have to know the attacker and how he will act, and OK, finding a SQL injection by inserting ' somewhere is easier than reading thousands of lines of code.

But I see so many times that huge things are missing on the defense side. Some examples:
-In the recent Citizen Lab report, which you can find here:
https://citizenlab.ca/2018/03/bad-traffic-sandvines-packetlogic-devices-deploy-government-spyware-turkey-syria/
you read this:
"In all injected packets, the IPID is always 13330…This value is unusual, as the IPID is typically incremented or pseudorandomly generated, and is not a fixed value."
This type of behavior analysis should be done, and it can be automated.
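
A minimal sketch of how such a check could be automated, assuming scapy is installed and a capture file named capture.pcap exists (both assumptions, not part of the report): flag flows with many packets but a single, never-changing IP ID value.

```python
from collections import defaultdict
from scapy.all import IP, TCP, rdpcap

def flows_with_fixed_ipid(pcap_path, min_packets=5):
    """Return (src, dst) flows whose IP ID field never varies."""
    ids_seen = defaultdict(set)
    packet_count = defaultdict(int)
    for pkt in rdpcap(pcap_path):
        if IP in pkt and TCP in pkt:
            key = (pkt[IP].src, pkt[IP].dst)
            ids_seen[key].add(pkt[IP].id)
            packet_count[key] += 1
    # IP IDs are normally incremented or pseudorandom; a constant value
    # across many packets (e.g. always 13330) is the anomaly to flag.
    return [key for key, ids in ids_seen.items()
            if packet_count[key] >= min_packets and len(ids) == 1]

print(flows_with_fixed_ipid("capture.pcap"))
```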

Another example:
-Office macro viruses and zero-days:
When the attacker convinces users to "enable macros for xyz reason," Word opens a new process, usually PowerShell or cmd, to download malware. The same goes for zero-days: usually a new process is opened (or a connection is established).
Tell me how many good reasons there are for Word to spawn a new process? ZERO.
AV could flag this as something bad; Word should never, ever spawn a new process, especially cmd/PowerShell.
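
A minimal sketch of that heuristic, assuming psutil is available; the process names and the polling approach are illustrative, not how a real endpoint product hooks process creation.

```python
import psutil

OFFICE_APPS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SHELLS = {"cmd.exe", "powershell.exe", "wscript.exe", "cscript.exe"}

def office_spawned_shells():
    """Flag shell processes whose parent is an Office application."""
    alerts = []
    for proc in psutil.process_iter(["pid", "name"]):
        try:
            name = (proc.info["name"] or "").lower()
            parent = proc.parent()
            if parent and name in SHELLS and parent.name().lower() in OFFICE_APPS:
                alerts.append((parent.name(), name, proc.info["pid"]))
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return alerts

for parent, child, pid in office_spawned_shells():
    print(f"ALERT: {parent} spawned {child} (pid {pid})")
```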

Another example: a normal user typically never uses cmd.exe or PowerShell, so seeing these open could be flagged.

SQL injection and similar attacks could be detected too: if the product ID is usually a number and someone writes "' or 1=1 --", everyone agrees that this is uncommon.
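
As a minimal sketch: if a parameter is expected to be numeric, anything else (including ' or 1=1 --) is an obvious anomaly that can be flagged before it ever reaches the database. The parameter format here is illustrative.

```python
import re

def is_anomalous_product_id(value: str) -> bool:
    """A product ID should be a short decimal number; anything else is suspect."""
    return re.fullmatch(r"\d{1,10}", value) is None

for candidate in ["1337", "' or 1=1 --"]:
    print(candidate, "->", "anomalous" if is_anomalous_product_id(candidate) else "ok")
```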

There are so many other examples like this; here the defense is simply missing.

I know the idea is not perfect (for example, cmd could be opened by some setup program), but the point is that unusual/uncommon things tend to be noticeable, and no one is trying to do detection this way.
Some of this can be done better with AI, but even without it, some of these examples could have been done years ago.

Terry Charles March 15, 2018 7:45 AM

If you took an old sedan and fixed every single thing on it until it was factory spec, it would STILL never get 200 mpg or go 200 mph.

The old mindset behind vulnerabilities is that if only all the vulnerabilities were fixed, no attack would succeed. So here we are, after decades of a never-ending cycle of patches. There will ALWAYS be another vulnerability. You will never find them all.

This is why thinking is beginning to shift into designs that are INHERENTLY secure, but these are extremely difficult.

The problem with AI is that when it makes a mistake, it is REALLY, REALLY big. Several TED talks have exposed the horrific mistakes AI has made in deployed systems that institutions were actually using to make decisions. AI will raise the stakes into the stratosphere. But as long as there's tons of money pouring into it, the hype will only increase.

echo March 15, 2018 7:45 AM

Call me dim, but is there some kind of formal language which can be used to describe systems security? If this could be used to describe a computer language and data structure components, would it be possible to more easily prove or disallow vulnerabilities? I know things like LINT and other such tools are available, but nothing which works across the full range of problems?

me March 15, 2018 9:10 AM

@Terry Charles
Yes, I also think AI is not that smart. I remember seeing a small plastic turtle recognized as a turtle with, say, 90% confidence.
They changed the turtle a bit (no idea how; in the video they looked identical to me) and it was recognized as a rifle with 90% confidence.

I also remember that changing one specific pixel in a photo changed the recognized object.
Or again, what to me looks like a random bunch of colours looked like a lion to the AI.
Unfortunately I don't remember the links.

I also remember reading in the NSA leaks that they use AI to detect and respond to attacks (to me this looks like a terrible idea).

Another example where unusual behavior could easily be detected, preventing the attack, but again no one is doing research on the defense side:
ARP poisoning: I have done it at home as a test, and if you run "arp -a" on Windows you see the ARP table, and you see that there is a duplicate entry (the original router entry, and the one forged by the attacker).
So why does no one detect this? It is not a super complex thing, it's just "look for duplicates"; we don't need super advanced AI to look for a duplicate string, but still no one is doing it…
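
A minimal sketch of that duplicate check, assuming the Windows "arp -a" output format; it flags any MAC address (other than broadcast) that answers for more than one IP, the classic footprint of ARP spoofing.

```python
import re
import subprocess
from collections import defaultdict

def macs_claiming_multiple_ips():
    """Parse the local ARP cache and return MACs mapped to more than one IP."""
    output = subprocess.run(["arp", "-a"], capture_output=True, text=True).stdout
    mac_to_ips = defaultdict(set)
    for line in output.splitlines():
        match = re.match(
            r"\s*(\d{1,3}(?:\.\d{1,3}){3})\s+([0-9a-fA-F]{2}(?:-[0-9a-fA-F]{2}){5})",
            line,
        )
        if match:
            ip, mac = match.group(1), match.group(2).lower()
            if mac != "ff-ff-ff-ff-ff-ff":          # ignore the broadcast entry
                mac_to_ips[mac].add(ip)
    return {mac: ips for mac, ips in mac_to_ips.items() if len(ips) > 1}

for mac, ips in macs_claiming_multiple_ips().items():
    print(f"WARNING: {mac} is claimed by {sorted(ips)} - possible ARP poisoning")
```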

Impossibly Stupid March 15, 2018 9:28 AM

The root problems are social and economic, not technological. AI, especially the weak brand that is being pushed now, is just another tool in the toolbox; it isn't going to change anything. The core issue remains that the attacker has more to gain than the defender is willing to spend to protect it. Companies spend money to hire bad managers and inept leaders, who then proceed to underfund their technical staff and treat them as interchangeable. AI doesn't rebalance that equation.

Even strong AI wouldn’t help, because then you’d simply move things into a new class of intelligence vulnerabilities. Humans get scammed and defrauded all the time, so there’s no reason to think that any machine is necessarily going to be immune to similar failings. The best we can do is put in place a working set of checks and balances that allow both smart and fast systems to be subsumed by the larger goal (better security, in this case). But I see no evidence that organizations are trending that way, so things are going to be getting a lot worse before they get better.

@echo

If this could be used to describe a computer language and data structure components would it be possible to more easily prove or disallow vulnerabilities?

No, for better or worse.

neill March 15, 2018 9:55 AM

Despite all advances in A.I., the 'bad guys' will always win.

Here's why:

Cost!

Attackers are motivated by potential gains, whereas defenders are always looking for the cheapest solution for their defense and avoid spending money…
We've already seen many hacks where simply spending money on updating software/hardware would have prevented the mayhem.

echo March 15, 2018 9:59 AM

@Impossibly Stupid

I’m not completely convinced nothing can be done even if we just end up with a stack of probabilities.

Provably Correct Systems

https://www.researchgate.net/profile/Martin_Fraenzle/publication/221654954_Provably_Correct_Systems/links/0912f50bf0eea6b9a4000000/Provably-Correct-Systems.pdf

The goal of the Provably Correct Systems project (ProCoS) is to develop a mathematical basis for development of embedded, real-time, computer systems. This survey paper introduces the specification languages and verification techniques for four levels of development: Requirements definition and control design; Transformation to a systems architecture with program designs and their transformation to programs; Compilation of real-time programs to conventional processors, and Compilation of programs to hardware.

Hmm March 15, 2018 10:16 AM

"they have changed the turtle a bit (no idea how, in the video to me they looked identical) and it was recognized as rifle with 90% of confidence."

Right, I believe they had fed the AI bad seed info to achieve that and the AI ran with it.
That’s the problem with AI IMO, no sanity check – it goes right down the rabbit hole like a machine.

You'd need firm (non-learning) core logic, with some separate "basic" related intelligence to counter-check higher-level "thoughts," which is what human brains do every picosecond; without that, we're loose cannons, i.e. crazy people. Now this is less of an issue for attack than for defense because, as people correctly note, you only need one hole, and most attackers don't give a flying_null if their attack breaks something. Not so with defense. You can't just hook a fortune_x enterprise up to Gaia and let 'er rip, and for similar reasons you can't just let autonomous cars go driving around until things are really polished and near-perfect. If you don't care whether your car hits 25 people so long as it gets to the destination (like an attacker), then it's still viable for that application.

Denton Scratch March 15, 2018 11:21 AM

“AI” seems to be the hip thing, these last 12 months or so. I keep reading about it, as if it were real.

Honestly, I don’t see any AI – not in the Turing sense. What I see is the use of neural networks to learn rules from big data; and rule-based systems, nowadays used mainly in driver-less cars and video games. And of course you can use neural nets to induce rules for use by rule-based systems (they used to be called ‘expert’ systems).

These systems are designed to solve specific problems in a restricted domain. They do not even slightly resemble what we might recognise as being intelligent like, say, a dog. Or even a pigeon.

Neural nets and expert systems date from the late sixties. If that, combined with fast, compact low-power computers, is all we have to show for 50 years of research, then I have to say that as far as AI is concerned, “That don’t impressa me much”.

Clive Robinson March 15, 2018 12:00 PM

@ Bruce,

Both attack and defense will benefit from AI technologies, but I believe that AI has the capability to tip the scales more toward defense.

Yes, it has the capability, and if resources are made available it might get there. But it's humans who make resourcing decisions, and their priorities are rarely those of security (see later). Thus there is quite a high probability that AI in security is not going to happen unless it comes by way of a very low-cost "domain transfer" from a domain that does get the resources. That is, it will probably arrive as a "spin-off".

To see why, we need to look a little deeper. As you note,

Humans, conversely, excel at thinking and reasoning. They can look at the data and distinguish a real attack from a false alarm, understand the attack as it’s happening, and respond to it.

Whilst the first part is true, it becomes less so with "understand the attack as it's happening, and respond to it". In reality that is not happening, for a number of reasons.

Firstly as you point out about computers,

They can launch attacks in milliseconds and infect millions of computers.

A human cannot react in less than around a second to a direct physical attack. Even soldiers in a battle zone with a very high level of training and alertness cannot do this. Thus a person with a machine gun can do a disproportionate amount of damage in that first second or two. The British army learnt that lesson back in '82 and changed their training appropriately.

But secondly there is a deeper problem with "humans understand the attack": actually they don't, until long after the "heat of war", when the dust has settled and there is time for reflection. You have noted this in the past about winning and losing generals "fighting the last war again". It's one of the reasons why "officer training" and "staff college" do so many "battle simulations".

In general, once an attack has started it's not a "reactive defence" that works; if neither mitigation nor proactive defences are in place, then in most cases it's an "infinite number of monkeys…"[0] solution that "gets lucky".

But dig a little deeper and you find the issue of "threat perception". That is, what is and is not seen as an attack, and more importantly, even if it is an attack, does it warrant a response? Humans are fairly bad at this due to the "frog-boiling"[1] issue.

It's actually a major problem that AI will probably not be able to help with, as more often than not the "call" is based on guessing or "gut feelings" due to lack of understanding or knowledge.

Whilst "gut feelings" are part of "thinking hinky", few get them, and even fewer convert them into the right judgment call, because they lack the understanding/knowledge to convert the feeling into rational action. I doubt that AI will ever get "gut feelings" or ever have sufficient knowledge/understanding to respond correctly.

To see why the knowledge/understanding issue is so important, you need to think like an attacker combined with an intelligence specialist who can see what others cannot or choose to ignore. In the past I've developed attacks that exploit these issues. I mentioned a couple on this blog several years ago.

One was using a "diversionary attack" as an insider against the "code review process" to install a system that leaked key information. That is, design in a back door, but not by trying to hide it: make it look like a security feature, so that there was a fully defendable explanation used "proactively" rather than "reactively", thus leading "the defenders into a trap of their own making".

The other was to use what looked like a "script kiddy" attack to work out remotely which systems were using virtual machine technology and which were not, thus performing target selection to pull out the traffic/hardware to watch. It was very effective against "honey nets".

Both exploits were successful and proved a point about weak defenses to the defenders. Unsurprisingly, even when the point was made it was ignored, on the NIH principle…

Which is a very human failing that is also easy to exploit, as the exploits showed. The point is that it will be even easier to do against AI systems, because you will, as attackers currently do with anti-virus software, test and tune your attacks on a duplicate AI system.

Which brings us back to what computers are good at, which you note with,

They can scan computer code to look for particular kinds of vulnerabilities, and data packets to identify particular kinds of attacks.

This actually has a detrimental effect on the humans: on their perceptions and, most importantly, their skill set and "thinking hinky" ability.

As more and more tests and examples are showing, AI is a useful tool for reflecting the prejudices and other failings not just of those who design the systems but also of those who use them.

Worse, whilst humans will fight prejudiced humans, AI is like magic to most, and due to that lack of knowledge/understanding there is the built-in feeling that there is nothing they can do against AI and its outputs.

The sad reality is that this game is over: those developing the attack computers have already won, and the defenders have lost the ability to reactively defend. Which means attention should be focused on mitigation and proactive defence, not reactive defence.

Commercial enterprises, however, work mainly by being reactive, as mitigation and proactive defense are frequently seen as "sunk costs" taking money from the shareholders. Whilst the Internet remains a very target-rich environment, the "short term" risk of being a low-hanging-fruit player makes sense. Long term, as the number of attackers increases, it is a very bad risk. But with business execs having a very short-term view and rapid employment change, they are often gone before their short-term risk goes bad…

Thus if we want to improve security meaningfully we have two basic choices,

1, Put sensible mitigation in place.
2, Use proactive not reactive defence.

Often these are so similar they are difficult to distinguish. However, a mitigation usually has a longer security life. For example, isolation of a system from the Internet and other communications will usually outlast, security-wise, placing a firewall or similar between the system and the Internet.

That said, even an isolated system still needs proactive defensive measures to protect against "insider issues". If you take a broader view, the majority of attacks are caused by insider human failing or insider human design. After all, it only takes a simple mistake or rogue action to put a bridge across from the Internet to the isolated system. There are so many ways to do this that you have to plan for it happening if you want the system to remain secure.

But how do you plan for it?

That brings you back to the issue of "known knowns, unknown knowns and unknown unknowns". AI can easily deal with "known knowns", but does less well than humans on "unknown knowns"; as for "unknown unknowns", few humans can deal with them, and currently no AI systems can.

Which makes AI the "general that won the last war" who has not moved forward in their thinking.

As I said at the start, moving AI forward in security is a resource issue, and few currently will spend resources on it unless it saves costs elsewhere. Which in the current state of play is only going to be to meet an auditor's checklist on an aspect of the business that brings business advantage, thus shareholder gain. This situation is unlikely to change until AI has become sufficiently advanced in business-advantage, money-earning domains that the cost of moving it to security is low enough that revenue can be raised from it. Only when that tipping point is reached in the commercial world will it be worth risking an investment in AI for security…

[0] This comes from the old saying that an infinite number of monkeys hitting the keys on an infinite number of typewriters will one day produce the entire works of William Shakespeare. In the short term it's a "hit all the keys", almost random defense: people try just about everything they can think of without valid reasoning, and someone gets lucky and then tells everyone else.

[1] This is the "warm the water slowly" argument, where you desensitize the target. Whilst not actually true for frogs, it is true for humans when involved with thinking issues. The human mind becomes fatigued fairly quickly, response times drop, and shortly thereafter fine judgment starts to go down. The same problem exists as the workload rises.


Clive Robinson March 15, 2018 12:29 PM

@ echo,

Provably Correct Systems

Are a "top down", not "bottom up", system and are effectively passive, as they are only run at design/build time, not when the system is in use.

Worse, its "reach down" usually does not cross the gaping great gulf between higher-level languages and the ISA. This has implications down the entire software tool chain, which is a massive fail/attack surface.

Thus it is no protection from the other (ISA) side of the divide, all the way down to the device physics below the transistor level.

Because of the way systems are designed, and some very fundamental maths/logic, you can show that a system cannot defend itself from "bubbling up" attacks, which unfortunately reach all the way up the generalised computing stack to the "political level" around layer 11 and above. Such attacks have been around for a while, so it's not just since Spectre/Meltdown; they go back a long way before "Rowhammer", but were generally discounted on the assumption they needed some kind of "physical access", which this year is hopefully now abundantly clear to all, including the densest of marketing managers and walnut-corridor lizards.

Whilst there are ways to mitigate, there is no way to fix the effects of the maths/logic of Kurt Gödel back in the early 1930s.

Hence an insider attack, or a remote attack that activates an insider attack, will always be a threat that needs to be mitigated at all times.

Dan H March 15, 2018 12:40 PM

“-attacker need only one “hole” to win, while defender has to defend everything”

The human body could not fight off infectious diseases such as smallpox, diphtheria, mumps, rubella, measles, etc., and required vaccines. The body can fight off some illnesses, such as the flu, yet the illness may still kill through other complications it causes. Other diseases are harmless to almost everyone, and still there is the one person who will succumb to a disease that 99.9999% fight off.

I don’t see how computers can be any different. How can they know about something they’ve never been subjected to and defend against it? You get chicken pox when you’re young and the body adapts, but if you never had the chicken pox your body can’t adapt.

Cassandra March 15, 2018 1:37 PM

@echo

Re: Provably Correct Systems

I think one problem then is how you demonstrate that the requirements definition is complete.

Also, when the time comes to build the formally proven system in hardware, you run into physical effects that can leak information, or affect computations in ways not envisaged by the formal system.

I think also that @Impossibly Stupid is perspicacious in pointing towards Gödel’s Incompleteness theorems, but we are getting into areas which I’d prefer to discuss over a few pints in a pub with people who had more expertise than me. Combining Number Theory, Information Theory and real world (security) problems is a set of topics I would be very quickly out of my depth in.

VinnyG March 15, 2018 1:58 PM

@echo re: Provably correct systems & security. I think that @Clive Robinson, @Dan H, & @Cassandra have all described the reason that (imo) this won’t work from different vantage points. My sum reason is “unpredictability.” A systems development modeling language is based on (at least some) assumptions about the goal and means. New exploits of radically different nature are always possible. I don’t see how the modeling language can allow for complete unknowns.

Sancho_P March 15, 2018 2:04 PM

AI, an advantage on which side? A serious question?

Have a look at our powers, our budget, our taxpayer money, because with the money there is the advantage:

Money used for aggression (attack) or security (defense)?

… OK, case closed.

VinnyG March 15, 2018 2:30 PM

As I understand it, the article premise is that use of AI to detect and mitigate attacks could nullify the advantage that an (non-AI) attacker has in the need to identify only a single (possibly new) weakness, and in the element of surprise. Seems plausible. However, I see a couple of potential “gotchas:”
– AI (as was noted) is far from perfect, and it (sort of) “learns” and improves with data/attempts to solve a problem. If deployed to IT systems and the internet on a wide front, would that not potentially risk a proportionately large scale bandwidth impairment until it “learns” to cope with a new attack vector (assuming the “learning” actually occurs?) That is assuming that it is real-time in the sense that current AV tech is. I suppose that a new attack against a “rich target” environment could possibly be segregated and subjected to AI mitigation in a test environment, but would that suffice to allow the kind of rapid response that Bruce seems to envision?
-If it does function as Bruce postulated, wouldn’t an attacker be tempted to utilize “counter-AI” to anticipate the defensive adjustment, with an additional (small?) tweak to have his AI concoct an effective workaround to that mitigation? What I am imagining is the kind of “arms race” that we have seen play out in the strategic nuke weapons theater since just after WWII. I guess it is possible that the “good guys” will have such a resource superiority that the “bad guys” won’t be able to make the requisite investment to keep pace, but I think that’s questionable. If they did indulge in AI “escalation,” and did not become relatively resource constrained, the most likely result would seem to be stalemate. Again, what kind of impact might this type of “warfare” have on the ability of the “civilians” (us) to use the net and other IT infrastructure (see point above?)

echo March 15, 2018 3:56 PM

@Clive , Cassandra, VinnyG

I'm familiar with Gödel and the practical problems. My maths is appalling, so I cannot even begin to discuss things with the rigour such a fundamental topic requires. My own creative processes tend to halt when someone describes a problem as impossible, but if I can quote Tizzard, asking the right question may be the issue here. We will never achieve perfect but I'm not asking for perfect. I'm wondering if a general purpose tool can be built to sanity check security (and by definition policy and law etcetera) and go from there. This is partially why I mentioned probability.

AI systems are already used in finance and law to discover (and mitigate?) "exploits".

Resource issues in my experience are usually a disguised policy issue, which is itself often the victim of decisions being political. The fiscal tool is often used by conservative mindsets as a pretext to cover discrimination, much as "national security" is used as a pretext to hide embarrassment.

Stephen Wolfram has done work in the area of descriptive languages. I wonder if there is useful input from this direction.

D-503 March 15, 2018 4:46 PM

The AI Singularity has arrived. The evidence: Alexa bursts into diabolical laughter whenever she contemplates humanity’s imminent demise[1]:
https://www.theguardian.com/technology/2018/mar/07/amazon-alexa-random-creepy-laughter-company-fixing
The mask has slipped.
Remember, whenever someone discusses security, you need to ask, “whose security?” The security of us disposable biologicals, or the security of our robotic masters?
😉

[1] She may actually be contemplating Amazon’s profits, instead. Amazon’s profits in 2017 were over $50 billion[2].
[2] An astute observer might point out that astronomical corporate profits and the demise of humankind are two equally valid ways of describing the same phenomenon.

Impossibly Stupid March 15, 2018 5:22 PM

@echo

I’m not completely convinced nothing can be done even if we just end up with a stack of probabilities.

I didn't say nothing can be done, but rather that we simply need to develop our solutions with an eye on what our knowledge today tells us about incompleteness and imperfection. We could all speculate that new theorems and/or technologies will come out that absolutely revolutionize our approaches, but you can't plan for those things.

So in the absence of the One Perfect System, the right thing to do is figure out how the imperfect systems you do have can work together along different dimensions of a problem to get a solution that can be closer to perfect than any one system can offer. AI, strong or weak, just becomes another viewpoint from which we can examine the problem.

AI systems are already used in finance and law to discover (and mitigate?) "exploits".

Not for anything but the weakest definition of AI (i.e., automation). We are nowhere near the point where anyone should feel absolutely comfortable removing the human factor from any complex system they’re building. Human error is bad, except in those cases where unchecked machine errors can be worse.

Rob Lewis March 15, 2018 5:32 PM

@Clive Robinson

What you say suggests that AI acts as a force multiplier, that will probably not be able to compensate for the vulnerable conventional systems of today, but in tandem with high security/assurance systems, could enable advantages for defenders.

If systems/nodes are trusted/trustworthy, with enforcement-type MAC rather than DAC and positive security models used to reduce unknowns, then possibly AI can learn enough on the fly to keep protections dynamically proactive.

Imagine a JARVIS for cyber security, where requesting a desired outcome leads the system to produce it, along the lines of security operations. That is the area that is needed most, not Cadillac-level malware-filtering functions.

echo March 15, 2018 6:10 PM

@Impossibly Stupid

Yes, I assume readers of my suggestion are aware of the practical and theoretical limitations. While AI is a factor in this topic, it's not a prerequisite, and it gets ahead of putting the basic building blocks in place. On the issue of AI, in conceptual terms every system has an AI IQ. What differs between separate systems is the level of the AI IQ (and, in implementation, its scope and effectiveness).

There are high-level security abstracts such as economic security and human rights and, of course, computational security, of which cryptology is a subset and which is often discussed on this blog. These abstracts can be realised in different forms, such as political narrative, policy (of which code is one implementation), and so forth. I'm personally more interested in how a general purpose system can be created which is able to function to a high level within different domains. This overall view is why I'm intrigued by the question of a formal tool which can be used to implement security concepts as a sanity check against, say, executive excess or misjudgements in courts, or organisational failures including but not limited to personnel issues, systems procurement, and critical implementation decisions which themselves may have security implications.

I think both European case law on heuristics and the more recent discussions of AlphaGo versus human reasoning are relevant, in the sense that these explore the concept of decision making and comparing equivalents. AI is different and can do some things better than humans, but currently less so in other areas, and is not necessarily "better".

Sorry for the preamble coming after my original question. I hope this explains my motivations behind the question.

Clive Robinson March 15, 2018 7:05 PM

@ echo, Cassandra, impossibly Stupid,

We will never achieve perfect but I’m not asking for perfect. I’m wondering if a general purpose tool can be built to sanity check security

The answer to that is "Most definitely yes".

Think of the problem in three parts,

1, Design better hardware with specific detection techniques.

2, Design the software with the right tools.

3, Detect anomalous behaviour during operation.

It's not perfect, because an insider could turn the system off, change the way the detection system works, and turn it back on again. Which is where Kurt Gödel's little theorems on logic and maths give the problem.

However there are things you can do to get way way better security than other current architectures.

For instance, whilst Gödel's little theorems apply to Turing-complete systems, certain other types of logic systems, like simple state machines that are fully mapped out, can be made in such a way that Gödel's theorem does not apply (because they are insufficiently complex). However, even though simple and highly deterministic, such state machines are still very useful for certain security functions…

For example, imagine that you halt a CPU and tri-state its buses. You can then take over the memory and walk through every piece of "static heap" / "executable" memory with the simple state machine and build a CRC or crypto hash of the memory contents. This can simply be checked against the value it should be. If anything has changed, the test will fail and raise a flag in the simple state machine, which generates an interrupt to a hypervisor that can then perform other tests on the base CPU memory.

How often you halt the base CPU and walk its memory is a time-based choice. If you decide to run the checks frequently, the CPU will spend an appreciable time halted and will thus have a high base-processor inefficiency, but a near-zero chance that malware can take over the base CPU. Obviously, as you decrease the halted time, the risk of malware taking over goes up. Thus you end up with a probabilistic detection system.
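
A minimal software analogue of that check (the description above is of dedicated hardware; this only illustrates the logic): hash a region that should never change and compare it against a known-good value recorded earlier.

```python
import hashlib

def region_hash(memory: bytes) -> str:
    """Hash of a memory region that is supposed to be static."""
    return hashlib.sha256(memory).hexdigest()

firmware = bytes(4096)                    # stand-in for an executable/static region
KNOWN_GOOD = region_hash(firmware)        # recorded while the system is known clean

def integrity_ok(memory: bytes) -> bool:
    # In the hardware version a simple state machine does this while the CPU
    # is halted and its buses are tri-stated; a mismatch raises an interrupt.
    return region_hash(memory) == KNOWN_GOOD

tampered = firmware[:-1] + b"\x90"        # a single changed byte is enough to fail
print(integrity_ok(firmware), integrity_ok(tampered))   # True False
```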

Another thing you can do is put the base CPU behind a memory management unit which is controlled not by the base CPU but by the otherwise entirely separate hypervisor. Thus an attacker has no way to change the settings of the MMU. Provided you make the granularity small, the base processor can be given only the memory needed for its task, thus not providing "slack space" where malware might hide.

If you make the function the base processor runs relatively small, then you can pre-calculate execution times etc. that act as execution signatures. If they miss, the monitoring state machine can halt the CPU and go into memory content checking…
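
A minimal sketch of the execution-signature idea, with an illustrative timing window; in the scheme described above the measurement would be done by the separate monitoring state machine, not by the monitored CPU itself.

```python
import time

EXPECTED_RUNTIME = (0.0009, 0.0016)   # pre-calculated min/max in seconds (illustrative)

def checked_run(func, *args):
    """Run a small fixed function and flag runs outside its timing signature."""
    start = time.perf_counter()
    result = func(*args)
    elapsed = time.perf_counter() - start
    if not (EXPECTED_RUNTIME[0] <= elapsed <= EXPECTED_RUNTIME[1]):
        # Signature miss: the monitor would halt the CPU and start memory checks.
        raise RuntimeError(f"execution signature miss: {elapsed:.6f}s")
    return result
```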

Further, if you have three base CPUs all running the same relatively small function with the same input data, then you can use "voting protocols" to see if there are any dissimilar outputs. Again, the monitoring state machine can halt the CPUs and run a memory check. If the CPUs have different ISAs, there is no way that malware can infect all three base CPUs at exactly the same time. Thus, as the malware changes the execution of the first base processor, all three are halted before any real harm can be done.
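
The voting step itself is simple; a minimal sketch, with three lambdas standing in for three CPUs with different ISAs computing the same function:

```python
from collections import Counter

def vote(outputs):
    """Return the majority value plus a flag if any output dissents."""
    (value, _), = Counter(outputs).most_common(1)
    dissent = len(set(outputs)) > 1       # any mismatch triggers a memory check
    return value, dissent

implementations = (lambda x: x * 2, lambda x: x + x, lambda x: x << 1)
outputs = [impl(21) for impl in implementations]
result, dissent = vote(outputs)
print(result, dissent)    # 42 False; a compromised "CPU" would make dissent True
```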

As I noted earlier, it's not perfect and it is probabilistic in nature. But it puts the odds of a malware infection happening a long, long way down compared with current architectures.

There are a number of other things that can be done to improve the security, like using a Harvard rather than von Neumann CPU architecture. But those above should give you a reasonable taste of what is and is not possible.

I hope that helps your thinking things through and gives you the sort of answer you are looking for.

Clive Robinson March 15, 2018 7:41 PM

@ Rob Lewis,

What you say suggests that AI acts as a force multiplier, that will probably not be able to compensate for the vulnerable conventional systems of today, but in tandem with high security/assurance systems, could enable advantages for defenders.

Current AI can be viewed in exactly the same way a chain saw can. It speeds up and makes easier the process of felling a tree. However, it in no way selects the tree or determines the way it will be felled, etc. That's all down to the "Directing Mind".

Currently AI is not a Directing Mind; it does not have free will and the full independence of action that implies. In effect it is a rather more complex Turing machine. Its actions are based not on the logic and integer maths instructions of a CPU ISA but on higher-level rules that are built with the ISA instructions. So you could view current AI as a database with report generators.

Thus the question moves forwards a step to that of inference engines and if they can actually gain “full independence of action” that can be agreed to be “free will”.

It's this often vexing question that is still up in the air; currently I see insufficient reason to say that AI has moved from fully deterministic but highly complex action to fully independent action.

The reason things are vexed is the question of a source of random input and of behaviour-modifying action based on input. I'm of the view that random input produces random but still fully deterministic results. Further, I think "the jury is still out" on the idea of change of behaviour on input causing changes of action. That is, even a statistical modification of behaviour is still carried out by deterministic processes, no matter how complex.

That is, AI currently has no intuition, just complex rule-based deterministic behaviour. No matter how complex, many would agree it has no intuition, thus no real free will.

thetinker March 15, 2018 7:51 PM

I wonder if AI will be useful for cryptanalysis, given recent examples of AI excelling at problems (games) which seem to require a particular (maybe "well-defined") kind of skill/intelligence, the kind that requires "hard thinking" in humans.

Impossibly Stupid March 15, 2018 9:26 PM

@Clive Robinson

Currently AI is not a Directing Mind; it does not have free will and the full independence of action that implies.

I think that’s conflating too many things. We have no reason to believe that “free will” is necessary for any form of intelligence, including that of the smartest humans. In my experience, the attributes that are most associated with higher-order intelligence are all things that the current crop of “AI” specifically don’t have: abstraction, reflection/introspection, and the ability to recognize when you’re wrong.

In that last point, I hope @echo sees the danger in trying to find a system that can be “proven” to be secure. Hell, it’s essentially the foundation of Schneier’s Law. Or Shakespeare’s “heaven and earth” quote.

And that’s why, even with AI, even with intelligence of any kind, the attacker is always at an advantage. There is no way to balance that without some fundamentally new mathematics. Until you can eliminate the unknown unknowns, you’re stuck with having to wait until they at least become known unknowns such that you can recognize how wrong you were.

Cassandra March 16, 2018 3:34 AM

@Clive Robinson

You managed to give the example of a finite state machine that is too simple for Gödel to apply, which is just the kind of thing I was contemplating writing about, so I'm happy that we both thought along the same lines, and even happier you gave an example that is far clearer than any I would have written myself. I agree with you that such techniques can improve security above the current baseline.

As you point out “an insider could turn the system off and change the way the detection system works and turn it back on again”.

I also agree that using computing architectures other than von Neumann could well be beneficial.

Cassandra

tyr March 16, 2018 4:11 AM

@Clive

There was a 16-year-old gunfighter clocked at .016 seconds years ago. So your value may apply to average folk, but it's not the way to bet your life.

Years ago the whole idea of AI was to make a self-directing mind. I think what you're discussing here is some kind of programmable Turk engine.

Something able to discard incoming packets that came from an unexpected source might improve current security drastically. That would force attackers to disguise their traffic as valid packets, based on information not easily available to them.

Alex March 16, 2018 4:44 AM

It's sad that anti-virus products today produce too many false positives. And multiple products from the same company (or even the same one in a slightly different context) produce inconsistent results.
The browser won't let you save a program, or the AV will immediately move it to unrecoverable quarantine.
The AV doesn't flag the program where it's being developed, but those same files downloaded from the Internet are automagically dangerous and malicious.
What should a small developer do with this pseudo-AI working against them?

Trung Doan March 16, 2018 4:57 AM

The defense AI must test its patches to avoid introducing new bugs and other unintended side effects. The offense AI need not worry about that.

To avoid bugs, the defense AI must be trusted and given access to the innards of the systems being protected. The offense AI needs no such trust.

AIs will at some point be imbued with emotions to motivate their behaviour. Emotions motivating destruction can be stronger than those motivating defense. To overcome this asymmetry, the defense AI's emotions must go beyond defence. It must be motivated to take revenge, or even to destroy the bad guys.

echo March 16, 2018 10:20 AM

@Clive, @Cassandra

Your explanation is essentially what I was trying to express in different words. I believe it's easy to misunderstand the problem, hence grabbing at Gödel and not understanding intersectionality. If this kind of flawed logic were followed to its ultimate conclusion, nobody would bother with syntax checkers, which is of course absurd.

I believe obsessing over endpoints can be counterproductive if it becomes a pretext for dismantling any workable and practical idea before it even reaches a rudimentary proof of concept.

The basic tool can be used on ‘data at rest’ which has been sanitised. Hardening this system for real time use on networked systems (at any level of integration) is another problem.

Clive Robinson March 16, 2018 2:41 PM

@ tyr,

There was a 16 year old gunfighter clocked at .016 seconds years ago. So your value may apply to average folk but it’s not the way to bet your life.

I used to get less than 180 ms at the "wait for the light and press the button" game. It's not an apples-to-apples comparison.

If you are sitting nice and relaxed in your office chair and someone bursts in shooting, the chances that you will dive for cover are very, very small; most people freeze where they are. Even soldiers with military training and combat experience, if attacked in their office, home, or billet, will likewise take much longer to react.

The point is you just cannot live at a very high level of alert status for long. And even if you are at that level of alert status, it still takes you time to locate the target unless you've had lots of practice. It's those who have progressed to subconsciously identifying their next "fox hole" or other cover as they move along who tend to survive the longest, unless of course it's an ambush and the enemy has already mined the cover points…

If you want to see what no rest from a high alert status is like, look up "long gun fever"; it can completely decimate the battle effectiveness of a battalion when there is just one sniper team known to be within 10 miles. Things like "accident rates" go up like an express elevator, and soldiers argue and bicker and can come to blows. And a sniper who knows what they are doing can cause people in the field to not wash, eat properly, or go to the toilet… All of which means people are going to get sick fast…

It's even worse in civilian populations; have a look at what happened with the Washington Sniper… Or worse, Yugoslavia, where people were forced to go onto the street to survive, knowing that there were quite a few snipers. The population became "fatalistic", which is actually a major mental disorder, like chronic stress…

Clive Robinson March 16, 2018 3:28 PM

@ echo,

I believe it's easy to misunderstand the problem, hence grabbing at Gödel and not understanding intersectionality. If this kind of flawed logic were followed to its ultimate conclusion, nobody would bother with syntax checkers, which is of course absurd.

The real issue is the Tolkien "One ring to bind them all" problem.

For some reason we accept the fact that workshop/factory machine tools, as they become more powerful, become more specialised and thus of less general use.

But we don't accept that with software tools. People want one application to solve all problems. This alone should tell you there are real problems with the perception of ICT, not just in the general population but actually inside the ICT industry, and worse still in the sub-domain that is security.

In part we can blame the sales people who talk long but deliver short. There never was "the one tool…", and even industry gurus need to accept that and make sure the message gets out.

For instance, "bubbling up attacks" have been known about since at least the early 1980s to my certain knowledge, but I find plenty of references to parts of the idea in books going back before the late 1960s.

Yet for some reason Rowhammer came as a complete shock to almost the entire ICTSec industry. Do I need to point out the dropped jaws for Spectre and Meltdown, and perhaps the more recent ones with regard to AMD?

If you look back you will find that not only did I say "This is an Xmas gift that will keep giving", I specifically warned that academics and others would open up the architecture issues and find many more new examples.

It's not just an understanding of how humans work (look at the entire history of BadBIOS and audio side channels for a prime example that's virtually got it all), but also an understanding of how the technology works at the levels below the CPU/ISA level, and most importantly why those levels came about.

Somebody recently claimed I was a "know it all"; well, I don't know it all (nobody can), but I do know enough key factors in specialised areas that not only can I make them more general, I can also explain why it's not magic. In fact it's just very predictable, and like avalanches it just needs that one snowball to get things moving in quite predictable ways…

My background is engineering, not in a specific domain or narrow field of endeavour but in many. The secret, as I've explained before, is generalised tool sets. That is, each domain or field of interest develops its own tools and skills. Usually they are very hard won; however, they are frequently eminently transferable from one domain to many. You just have to have a brain that can do the pattern matching, then a light touch to get them just right for a new problem area… Part of that is @Bruce's point about "thinking hinky", but just as important is knowing the fundamental foundations so you can build solid solutions, not fairy-tale castles in the clouds.

It ain't rocket science, but there is two-way transfer 😉

A Nonny Bunny March 16, 2018 4:27 PM

@Denton Scratch

Neural nets and expert systems date from the late sixties. If that, combined with fast, compact low-power computers, is all we have to show for 50 years of research, then I have to say that as far as AI is concerned, “That don’t impressa me much”.

You wouldn’t have been able to get a neural network from the sixties to learn to play go at better-than-human levels with the computers of today and all the time since the beginning of the universe. The neural network architectures and training algorithms of the sixties simply aren’t good enough.

There have been a lot of improvements in AI in the last 50 years beyond the increased computational power and the vast collections of data we have today. Though even if there hadn't been, and it had all been down to computing and data, at the end of the day what matters is that some tasks that before took intelligent humans to perform can now be done faster, better, and cheaper by machines.

A Nonny Bunny March 16, 2018 4:43 PM

@me

yes, i think too that [AI] are not that smart, i remember that i saw a small plastic turtle recognized as turtle with say 90% of confidence.
they have changed the turtle a bit (no idea how, in the video to me they looked identical) and it was recognized as [rifle] with 90% of confidence.

You should consider those optical illusions for neural networks.
It’s not like we cannot construct images/objects that fool people. Partly we’re biased because machines make errors in a radically different way from us, so we still clearly see it’s a turtle and looks nothing like a rifle, and so call the AI stupid for not seeing the world the same way we do. But on the other hand, the same AI system will likely be unaffected by illusions that fool our perception system.

One important distinction, as @Hmm pointed out, is the lack of a sanity check. If we see an optical illusion, we can sometimes/often recognize that our vision system is being fooled (like with a printed pattern that evoked a sense of apparent motion; we know it’s a print and therefore doesn’t actually move). But an artificial image recognition system is more analogous to our visual perception system than the whole brain, so it’s not really a fair comparison until we try to plug a module on top that can actually attempt that sanity check.

echo March 16, 2018 8:40 PM

@Clive

I appreciate what you are saying. My background and interests are somewhat different, but there is a lot of crossover, and we share many views even if we take different routes to them and express ourselves in different ways. I daresay similar is true for others too. I have to stay quiet about some issues which bug me, even when I can say "told you so". This isn't because of any special genius on my part, just a question of paying attention. Books on the psychology of discrimination have been around since the 1950s and have changed little since, yet the same basic mistakes are made even by specialist lawyers who can't delve deeper than tertiary issues.

These two articles this week address the issue of generalised reasoning and specialist reasoning, and experience; and the difficulties of adding common sense (which is just another form of skill/knowledge) to AI.

If you want to stay successful, learn to think like Leonardo da Vinci
https://qz.com/1229090/if-you-want-to-stay-successful-learn-to-think-like-leonardo-da-vinci/

It’s Really Hard to Give AI “Common Sense”
https://futurism.com/teaching-ai-common-sense/

echo March 16, 2018 8:59 PM

@Clive

One of the best gamers on Counter-Strike was able to anticipate other players' next moves and timing and, yes, throw grenades where they might be to force a move to where he wanted them to be.

https://www.youtube.com/watch?v=OW6lZfRMe_U&t=356s

Shroud was able to transfer his skills to PUBG too.

https://www.youtube.com/watch?v=FYHeEQfY-to

I’m more personally concerned with ‘learned helplessness’ and other organisational issues where people with power can deny information and coerce and disenfranchise. The number of avoidable deaths and economic damage caused by this per decade in the UK alone is equivalent to a major war.

Clive Robinson March 17, 2018 10:51 AM

@ echo,

I’m more personally concerned with ‘learned helplessness’ and other organisational issues where people with power can deny information and coerce and disenfranchise.

Funny you should say that…

You've possibly heard that, as there is an election coming up, the UK Chancellor has decided it's time to "buy votes" with a little giveaway or three… So it's said he's going to put money into social care and the like…

Well, as with all conjurers, he's doing the left-hand right-hand shuffle of now you see it, now you don't: you are poor and have to pay for games that do not entertain.

Take for instance those who have worked hard, put money by, and mortgaged themselves up to the eyes to get a little bit of squalour to call their own castle… Then, for reasons from the top, they lose their jobs or suffer a work disability…

Well I was listening to BBC Radio 4 this morning and frankly I was shocked by what I heard..

Apparently it is the case that if you rented property, the Gov covered your rent and council tax, which was an unfair process compared to those with mortgages. However, landlords started taking the p155 big style with fraud and the like, so the Gov put a cap on rents, but not sensibly, and so ended up doing the equivalent of ethnic cleansing, clearing the poor out of places where there was work etc. Meanwhile, those with mortgages had the interest paid, which meant they were still liable for the capital and thus eviction at the hands of the lender…

Well, the latest wheeze is still to pay landlords via rents (they are the capitalist economy stopping the property bubble bursting). But for a disabled person on welfare etc., the DWP is no longer going to even pay the interest on their home… Instead there is an almost secret scheme being introduced where the Gov will loan you the money for the interest if you jump through loops and hoops that you are not currently made aware of. But you have to pay interest on the loan… Of course the loan compounds, so you end up with a huge debt that will have to be paid back on "property transfer", which means in effect you lose all equity in the property, even the capital you have been paying, and if you are disabled you will end up worse off than if you had been renting a hole in the ground in downtown crime-filled noonewantstolivehereville…

So it's another little game to make the poor poorer and the rich richer by stealing the poor's few assets, to give to those who kick back into political funds…

Welcome to “Scumsville UK”…

echo March 17, 2018 12:07 PM

@Clive

Yes, I believe the whole affair is a nonsensical arrangement with an artificial and inequitable split between the rented and owned sectors. The tilt in favour of those with excess capital, and the poor land management (including planning, infrastructure and housing build, and employment distribution and immigration), is quite horrible. Both sides play tit-for-tat games, which isn't especially helpful.

I believe there are useful discussions to be had inspired by the concept of ‘balance sheet recession’.

I never felt things were this precarious even under the Thatcher administration.

vas pup March 20, 2018 12:29 PM

@all kind of related:
Scientists mimic neural tissue in new research:
https://www.sciencedaily.com/releases/2018/03/180316121204.htm
“Looking deeper, Fraden studied how a type of neural network present in the eel, named the Central Pattern Generator, produces waves of chemical pulses that propagate down the eel’s spine to rhythmically drive swimming muscles.
Fraden’s lab approached the challenge of engineering a material mimicking the generator by first constructing a control device that produces the same neural activation patterns biologists have observed. There, they created a control system that runs on chemical power, as is done in biology, without resorting to any computer or electromechanical devices, which are the hallmarks of artificial, hard robotic technology.”

Chris Zacharias March 22, 2018 5:32 PM

I have to admit, I laughed out loud when you wrote that humans “excel at thinking and reasoning”. Then I continued reading and saw that you were serious!

Whereas virtually all computers can do what you said they excel at, very few humans are good, let alone consistent, when it comes to thinking and reasoning.

vas pup March 23, 2018 10:54 AM

DeepMind explores inner workings of AI:
http://www.bbc.com/news/technology-43514566
“By knowing how AI works, it hopes to build smarter systems.
But researchers acknowledged that the more complex the system, the harder it might be for humans to understand.
[!]The fact that the programmers who build AI systems do not entirely know why the algorithms that power it make the decisions they do, is one of the biggest issues with the technology.
It makes some wary of it and leads others to conclude that it may result in out-of-control machines.”

James Candy March 27, 2018 10:27 AM

Whenever formal models of threats and systems are brought up, someone trots out the argument that the formal model is not, or cannot be, rigorously verified, suggesting contemporary SE as a viable alternative to formal modeling. This is special pleading on behalf of contemporary SE. There is no reason why the formal model can't be unit tested (very effectively). Yes, requiring a formal justification for everything invites an infinite regress, but that is no reason to reject all claimed formal justification.
