Unhackable Cryptography?

A recent article overhyped the release of EverCrypt, a cryptography library created using formal methods to prove security against specific attacks.

The Quanta magazine article sets off a series of “snake-oil” alarm bells. The author’s GitHub README is more measured and accurate, and illustrates what a cool project this really is. But it’s not “hacker-proof cryptographic code.”

Posted on April 5, 2019 at 9:31 AM • 25 Comments

Comments

Faustus April 5, 2019 11:30 AM

Interesting, but even the project page is too PR for me. Most cryptographic algorithms lack formal proof for their basic operations. Implementing the algorithm provably correctly is certainly an important step, but it does not make the result provably unbreakable.

Fazal Majid April 5, 2019 11:30 AM

That was my reaction as well. How can formal methods protect against side-channel or timing attacks, or untrustworthy processors with embedded Intel AMT/IME?

Frank April 5, 2019 2:43 PM

For crypto researchers this might be interesting. In the area of applied IT security, the easy use of crypto is much more important: for instance, basics like how to use AES in, let’s say, CBC mode securely. For the average or even above-average developer this is a complex process, and to this day a very error-prone one.

“Easy-to-use” crypto libraries or wrappers would be a great help for this!
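
As a rough illustration, here is a minimal sketch (in Python, assuming the third-party `cryptography` package; illustrative only, not a vetted library) of the details such a wrapper would have to get right on the developer’s behalf: a fresh random IV per message, PKCS7 padding, and encrypt-then-MAC so the ciphertext is authenticated before any decryption happens:

```python
import os, hmac, hashlib
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    """AES-CBC with PKCS7 padding and encrypt-then-MAC (HMAC-SHA256)."""
    iv = os.urandom(16)                        # fresh random IV every call
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
    ct = iv + enc.update(padded) + enc.finalize()
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
    return ct + tag                            # tag covers the IV as well

def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    ct, tag = blob[:-32], blob[-32:]
    want = hmac.new(mac_key, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(want, tag):     # check the MAC before touching the ciphertext
        raise ValueError("authentication failed")
    iv, body = ct[:16], ct[16:]
    dec = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).decryptor()
    padded = dec.update(body) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()
```

Every one of those steps (IV reuse, unauthenticated ciphertext, padding checked before the MAC) is a place real-world code has gone wrong, which is exactly why easy-to-use wrappers matter.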

Parry Noir April 5, 2019 3:12 PM

@Faustus

Most cryptographic algorithms lack formal proof for their basic operations. Implementing the algorithm provably correctly is certainly an important step, but it does not make the result provably unbreakable.

Not only that: a “provably correct implementation” is only correct relative to a particular abstraction of the underlying infrastructure. Abstractions, by their nature, filter out at least some (and often a lot of) details. A lot of security problems go unnoticed because they are invisible at the level of abstraction at which security is examined.

Faustus April 5, 2019 5:23 PM

@Parry Noir

A lot of security problems go unnoticed because they are invisible at the level of abstraction at which security is examined.

Well said!

Face April 5, 2019 6:20 PM

Oh come on! He named the compiler KreMLin.

I can’t be the only one who gets the joke.

1&1~=Umm April 5, 2019 10:53 PM

Over on the Friday squid page @Taz asked,

“Real or snake oil?”

To which my answer is,

Journalistic licence at best, more ‘click bait for the unknowing’.

The article starts with,

“‘Researchers have just released hacker-proof cryptographic code — programs with the same level of invincibility as a mathematical proof.'”

Which sounds grandiose, but actually does not hold water as an argument.

I’ll just outline some of the issues, others can fill in more 😉

There are numerous supposed ‘mathematical proofs’ around, and they usually mean squat diddly in the real world, as you will find they are based on two assumptions,

1) It’s like another proof.
2) The assumption that the other proof is valid.

Oh, and the classic ‘ignore reality’,

3) The scope of the argument, and thus of the proof, is constrained.

In effect it’s ‘house of cards thinking’, where it’s assumed that no breeze, table bump, jumping cat or other thing outside the very limited scope will come along and knock the house down…

If you look further down the article you will see the library is actually based on ‘formal methods’, which unfortunately have more than a few assumptions behind them.

The first and most important is that formal verification only verifies its input, not its output or what happens to that output; that is just ‘assumed’ to be secure as a given…

If people think about that a little, what formal verification is, is ‘a top-down process that stops only a short way down the computing stack’.

In many cases the output is effectively just more ‘source code’, thus verification stops above the compiler pre-processor and all the subsequent stages that turn it into running code.

As was pointed out long ago by one of the creators of Unix (Ken Thompson, in ‘Reflections on Trusting Trust’), the compiler can be subverted in ways most coders, even expert ones, cannot find.

But let’s assume you are a real expert’s expert and can walk the assembler output, which is still just source code, and verify that.

What happens if I put my hack in the actual assembler, or the linker, or other object-code libraries? Are you enough of an expert’s expert to walk through the actual object code of each and every one?

But let’s assume you are some kind of long-snowy-bearded guru steeped in loading your code with switches on the front panel: can you check the microcode inside the CPU?

Very unlikely unless you work for Intel, and since the Pentium bug they patch the microcode every time the CPU gets dragged into life, so ‘that’s a movable feast’ at the best of times…

But let’s further assume that you are actually a hardware guru, with not just the beard but a taste for real ale, and a flat cap for the spreading bald patch the pony tail does not cover, to stop the risk of sunburn if you ever go out in daylight. Can you check the translation to RTL and the underlying hardware macros used to build the chip?

Let’s assume you can, down to the logic-gate level, as you are some kind of computing deity with the abilities of thousands of others…

How about the device physics in every chip…

Security, as Einstein observed about the universe, is “relative to the observer”, and bound not just by the observer’s scope but by the universe’s rules as well.

But those last bits about ‘below the CPU ISA’ are in no way theoretical. There are practical attacks such as Rowhammer and Meltdown, plus all those I/O attacks via DMA and the like on MMUs that have popped up from time to time since at least the 1980s, to my knowledge.

But there is another problem worse than ‘Black Swans’, which at least are attacks possible to envisage: there are without doubt attacks still to be found that have no commonality with known attack classes, and thus cannot be envisaged from existing knowledge…

So is this software library secure?

Not until all the above issues have been fixed… That’s not to say the library is insecure, but you cannot prove it is secure in practical implementations, and that, at the end of the day, is why a lot of money is spent on EmSec (TEMPEST on steroids) and ‘segregation’ via SCIFs etc., or as christened on this blog, ‘energy gapping’.

I hope that answers your question to your satisfaction (and others reading along).

1&1~=Umm April 6, 2019 12:23 AM

@Bas:

“One can use constant time algorithms against timing attacks.”

They work well on single-use microcontrollers without cache or the other extended low-level hardware features you find on modern desktop/server CPU chips.
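
For readers who have not met the term, here is a toy sketch (illustrative Python, not production code; in practice you would reach for a vetted primitive such as `hmac.compare_digest`) of what ‘constant time’ means at the algorithm level:

```python
def leaky_eq(a: bytes, b: bytes) -> bool:
    """Early-exit compare: runtime reveals the first mismatching position."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False          # an attacker can time this early exit
    return True

def constant_time_eq(a: bytes, b: bytes) -> bool:
    """Branch-free compare: same work regardless of where the bytes differ."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y             # accumulate differences without branching
    return diff == 0
```

And note that even the branch-free version only holds up if the interpreter, compiler and hardware underneath preserve the property, which is precisely where caches and the other low-level hardware features come in.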

For those around at the time AES was finalised, one or two people were sounding alarms about the AES competition and the fact that timing channels were not mentioned at all by NIST, who were in effect fronting for the NSA.

Since then others have pointed out that the competition code released on the competition pages for “speed tests” had things like loop unrolling in it, which had certain issues. One of these people then built exploit code, via the cache timing, that was shown to work fairly easily across a local area network…
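
To make concrete what leaked, here is a toy sketch (not the actual competition code) of the table-driven pattern at fault: the table index depends on secret data, so which cache line gets touched, and hence the lookup’s latency, correlates with the key:

```python
# Stand-in table; real AES implementations use 256-entry S-boxes or
# larger combined "T-tables".
SBOX = bytes(range(256))

def first_round_byte(key_byte: int, pt_byte: int) -> int:
    idx = key_byte ^ pt_byte   # secret-dependent table index
    return SBOX[idx]           # which cache line this load touches, and thus
                               # its latency, leaks information about idx
```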

As the NSA probably hoped, the ‘speed test code’ ended up being copied into people’s code, directly and indirectly, through libraries of code that were available as open source on the Internet.

Even now there are still computing systems using the ‘speed test code’ on accessible networks, and I fully expect some to still be around in another quarter century or so.

It’s this sort of ‘known attack’ that the authors of the paper you link to were trying to address. But as noted in the intro given in the link:

“‘However, in absence of further justification, the guarantees only hold for the language (source, target, or intermediate representation) on which the analysis is performed.'”

It’s just a variation on an existing formal method, thus prey to the assumptions within it, hence the warning.

The problem is that in the last year and a bit a new class of attack on CPU hardware has occurred that affects Intel, AMD, ARM and presumably other similar CPU architectures. This has been referred to by @Thoth and others as ‘the gift that keeps giving’, because new variations are still being found.

I suspect it is quite safe to predict that other ‘below the CPU ISA’ attacks will be found, and any one of them can be used to generate faults in the library code or exercise hardware faults, opening up side channels other than time-based ones.

The point is that formal verification and its methods are currently a top-down approach that can only reach a little way down at best. Hardware attacks are bottom-up, way below the reach of current formal methods, which are of absolutely no use against the likes of the ‘fault injection attacks’ used in ‘active EmSec’.

If you search this blog you will find comments by @Nick P, @Clive and @Wael about the issue of attacks ‘bubbling up’ from below the CPU ISA level, and the fact that there is nothing current formal verification can do to stop them on current CPU architectures. Thus the best thing to do is to make mitigations in hardware.

If you search this blog for @Thoth you will find he was designing such a system around the use of smart cards. Also that a group of security academics at UCL appear to have taken, without credit, the work of @Thoth and others which had been discussed on this blog, and, as noted, ‘got it only half right’ in the process.

Security via software approaches on single-CPU systems, even those with multiple cores, is a ‘busted flush’, due to the hardware that actually drives the shared control, data, address and power buses going out to memory and other parts of the system.

Likewise multiple-CPU systems that share external memory and have I/O and DMA controllers, unless certain precautions are taken. The ideas behind the precautions are apparently still ‘secret’, even though a study of physics at high-school level will give you all you need to think them out.

There is little that even machine code at the CPU ISA level can do to make this area secure. In fact, as has been noted on this blog, there is a real issue with ‘transparency’ from the likes of keyboard microcontrollers through to the main CPU(s), due to ‘security versus efficiency’ issues: in essence, the more efficient you make a CPU, the OS on it, and the driver code both top and bottom side, the more transparent the system becomes, and thus the more internal state leaks out.

Oh, and there are discussions about ‘fault injection’ that uses ‘error handling and correction mechanisms’ as a way to probe backwards through a computer to get at processes most would assume are segregated or isolated, and that even works backwards through the likes of data diodes to exercise latent side channels.

It’s a hostile world out there, and there must be thousands with ‘twisty little passages’ in their minds that enable them to, as @Bruce Schneier puts it, ‘think hinky’, and they have moved on from the software exploits that current formal verification methods try to prevent.

Attack researchers have moved down the computing stack to low-level hardware, where formal methods can do little to mitigate attacks that rise up through the stack, undermining the assumptions current formal verification methods are built upon.

I like the idea of what Project Everest is trying to achieve, but the reality is that attacks have provably moved on beyond what it can address, so from the more stringent security perspective it’s too little, too late. However, for building computers that are real walled gardens against the very people who purchased them, yes, Everest will serve certain interests very well.

gasche April 6, 2019 6:37 AM

I’m surprised by some of the reactions to formal methods here, which are almost hostile.

Sure, the Everest project can make only very partial claims about side-channel attacks, guarantees nothing if run on incorrect or compromised hardware, and so on. But it also gives very strong guarantees of the absence of implementation errors in the aspects of the implementation that are covered by its specification (the various kinds of programming and mathematical errors that have found their way into existing implementations, for example incorrect optimizations of big-number operations using arithmetic tricks), and on these aspects it is basically much more secure than any other existing implementation of the same protocols.
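
To give a concrete, hypothetical example of that class of error (an illustration in Python, not a bug from EverCrypt): schoolbook addition over 32-bit limbs that drops the final carry works on almost every input and fails silently on the rest, which is exactly the kind of thing a machine-checked proof rules out:

```python
MASK = 0xFFFFFFFF  # 32-bit limbs, little-endian order

def add_limbs_buggy(a, b):
    """Add two equal-length limb lists -- buggy version."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & MASK)
        carry = s >> 32
    return out                 # bug: a final carry of 1 is silently dropped

def add_limbs(a, b):
    """Correct version: the final carry becomes an extra limb."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry
        out.append(s & MASK)
        carry = s >> 32
    if carry:
        out.append(carry)
    return out

# add_limbs_buggy([MASK], [1]) == [0]: (2**32 - 1) + 1 "equals" 0,
# an edge case random testing rarely hits but a proof cannot miss.
```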

How come, when people do the hard work of providing a verified implementation that is much more secure than anything else we have so far, people concentrate on what the approach does not cover, instead of recognizing the impressive advances made on the parts it does cover? Sure, exaggerated journalism is exaggerated journalism (such pieces get produced about basically any research output), and people are too quick to make hyperbolic claims about “unbreakable X” or to swim in a false sense of security born of misunderstanding what is covered. But there is a lot of really solid work here that can, if we adopt it, extend it, and do our best to follow these state-of-the-art development methodologies, give strong improvements to everyone’s security. Why not also celebrate that?

1&1~=Umm April 6, 2019 11:12 AM

@gasche:

“I’m surprised by some reactions to formal methods here which are almost hostile.”

As has been seen in the article under discussion, formal verification and formal methods can, and often do, get very much oversold.

They play a small part in the security of systems, and they can still easily fail at what they are alleged to be able to do. This has been known since the 1950s, especially when the tools are misunderstood or misused, as can easily happen.

System security is a long path full of traps for the unwary; encouraging unwary thinking, in the way the journalist who wrote the article has, is something that needs robust rebuttal if future issues are to be alleviated, if not prevented.

With regards,

“How come, when people do the hard work of providing a verified implementation that is much more secure than anything else we have so far, people decide to concentrate on what the approach does not cover, instead of recognizing the impressive advances that have been made on the parts that it does cover?”

You only need to look at secure messaging apps for the reason behind “How come”. Much is said about the security of the communications link, whilst they all have significant failings in the far more important endpoint security, which in these apps is effectively non-existent. So the apps, whilst sold as the most secure thing since sliced bread because of their use of crypto, are in reality very easy to subvert with a simple end-run attack.

The same is true for formal methods: they are but one small link in a very long chain, and making statements that in effect claim they magically strengthen all the other weak links is not just laughable, it is disingenuous to the point of fraud.

Which brings us around to,

“But there is a lot of really solid work that can, if we adopt it and extend it and do our best to adopt these state-of-the-art development methodologies, give strong improvements to everyone’s security. Why not also celebrate that?”

Yes, there is a lot of solid work, but it cannot stand alone; the actual researchers in this case are aware of this and say so.

The thing is, there is an expression, “It’s an engineer’s lot to run the world without recognition”, and that is the way the world works. People are in general not interested in engineers; they are seen as ‘the oily oik who puts air in the tires’, only recognised when they fail.

There is the old saying, “Reward, around here, is like pissing in your dark trousers: you get a short-lived warm feeling that quickly chills, and if you are lucky nobody notices.”

The same is true for researchers and most software developers: their real reward for success is to be given another throw of the dice at failure, until either they fail and get ignominy, or they wise up and move into other work such as management, where their failings can be passed off onto others.

Douglas Adams summed it up when he commented about people with,

“‘And then, one Thursday, nearly two thousand years after one man had been nailed to a tree for saying how great it would be to be nice to people for a change, a girl sitting on her own in a small café in Rickmansworth suddenly realized what it was that had been going wrong all this time, and she finally knew how the world could be made a good and happy place. This time it was right, it would work, and no one would have to get nailed to anything.'”

Then, in the story, the world got destroyed by the Vogons to make way for a hyperspace bypass, so she never got to tell anyone.

Joking aside, I was once told “Good news does not sell newspapers”, which was amplified by the explanation that “‘Mrs Smith wins the cake contest’ is a headline only really of interest to those who like Mrs Smith, even in a local paper. However, ‘Mrs Smith’s cake poisons the judges at cake contest’ is of much wider interest and might make it into the national press on a slow news day. But ‘Mrs Smith’s cake poisons the Queen of England’ will make headlines in newspapers all over the world.”

This was quite a few years before Lady Di was killed by the paps and an out-of-control limo driver in a Paris underpass, but sad as it was, it proved the words true.

The simple fact, sad as it is, is that only those who know and like engineers or researchers are going to be interested in “they done good”. That’s just part of the human condition: get used to it, or find a way to make money that brings you fame as well as fortune, and then people will be interested in you outside the circle of your friends.

Faustus April 6, 2019 5:34 PM

@gasche

I am not at all against formal verification. I think it is very interesting and I have spent my time working with Agda.

But as people point out, it only covers so much. And the job of hackers, and therefore of security people, is to go directly to the areas that aren’t covered.

Denton Scratch April 7, 2019 6:15 AM

Hmm – I think the Quantum article is not the only place where over-hyping is going on.

The title of the GitHub page refers to ‘Agile Performance’. That would be OK on its own (despite that usage being unexplained), but the article then goes on to describe its API as ‘agile’. That is also unexplained, and I’ve never heard of an ‘agile API’ (apparently Google has, but so what).

So “I do not think that word means what you think it means” (~Inigo Montoya).

In the context of software development, ‘agile’ refers to various team management techniques that enable a team to respond more rapidly to changing requirements. It doesn’t mean ‘fast’, and it doesn’t mean ‘versatile’.

I find it off-putting when a software project’s marketing material hijacks technical terminology to make itself sound cooler.

Morne Butor April 8, 2019 5:55 AM

Do not confuse a computer science research paper with its future potential applications. It is a big step for computer science, but we all know there are hundreds of other steps to take before reaching the Grail of security. Be patient! Or better yet, be impatient and make the next step happen… Formal methods are still in their infancy; it will take time for them to grow up.

Petre Peter April 9, 2019 11:55 AM

To prove that a cipher in use is worthless is one thing, but it is another to propose something better in its place.

Phillip C. April 10, 2019 12:32 AM

I believe the publication is “Quanta”, not “Quantum”. Though not disagreeing per se with this Blog.

wiley April 10, 2019 1:44 AM

@Derek Jones

It certainly can be, but it’s not necessarily so. Formal verification can be exceedingly useful in defining better privilege boundaries and simplifying data flow diagrams during threat modeling. For example, IIS’s HTTP.sys is (was? I don’t keep up with Microsoft stuff) a proven-correct implementation of the HTTP protocol, and only ever suffered problems when some idiot developer added a caching layer without re-verifying it. The same is true for RTOS systems like the INTEGRITY microkernel or seL4. So yes, it can certainly be a sign of snake oil when someone claims that an entire system is proven correct and that this somehow makes it “unhackable”, but it’s still a priceless tool for reducing attack surface.

Anyway, this “unhackable crypto” thing just butchers the meaning of formal verification, especially since it implies that the cryptography itself is now flawless. It is useful to have cryptographic implementations that are proven correct, but not nearly as useful as verifying systems that are exposed to complex data formats from attacker-controlled sources, which is rarely the case for cryptographic libraries. The only reason this is useful is that we no longer have to worry that, 5 years down the line, someone will say “whoops, I guess we forgot to make this multiplication over here constant time” or “darn, I guess there’s another padding oracle attack to fix”.

wiley April 10, 2019 1:46 AM

(Disclaimer: I haven’t looked at this at all and have no idea how it differs from other formally-verified libraries like miTLS)

@Petre Peter

No one is suggesting a new cipher. This is a cryptographic library.

1&1~=Umm April 11, 2019 5:22 PM

@Petre Peter:

“to prove that a cipher that is being used is worthless is one thing, but it is another to propose something better in its place.”

Err, your logic is a little unsound.

To say “to prove” implies a quite high level of skill, almost certainly better than that of the cipher’s designers at the time they designed it.

Usually proofs implicitly come with either their own solutions or the knowledge that there cannot be a solution; in the former case simply because you can use the method of proof on a new design to ‘design out’ that failure mechanism.

fish April 16, 2019 11:38 PM

@carrots

Fun fact, the author of “Cryptography for Dummies” was fooled by that, promoting the scam artist (Robert Grant) who created that website and whose employees (including a rather well-known scam artist) all come from the same fake universities. She continues to believe that the cryptographic community is “abuzz” about this: https://twitter.com/chey_cobb/status/1111758461669384194

I also second watching the video. It’s hilarious.

1&1~=Umm June 12, 2019 2:58 PM

@Jim:

“Consider an enigma machine that changes key every few seconds.”

You have a limited polyalphabetic cipher inside a substitution cipher. A limited polyalphabetic cipher dependent on a very small number of fixed wheels has “cycles” that stand out. To get around this, the wheels have the simple substitution slugs mounted so they can be moved with respect to the ratcheting of the wheel. But that moves the cycles rather than breaking them up, so the ratcheting is made to happen multiple times, thus truncating a cycle.

But the truncation is far from sufficient, because the wheel movement is still basically that of the mechanical odometer mile counter in older vehicles. So you need to make other changes. Firstly, you need not just a lot more wheels to select from, but many more in use. You can cheat a little by making the pucks in each wheel reversible, that is, you can turn them over. You can make the odometer movement different, such that wheels step in a very irregular pattern; you might also make some step in reverse.
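
A toy sketch of that odometer-style stepping (illustrative Python; assuming 26-position wheels with a plain carry, and ignoring Enigma’s real double-stepping anomaly and ring settings) shows how short and regular the state cycle is:

```python
def step_odometer(positions, size=26):
    """Advance rotor positions the way an odometer does: the fast wheel
    always moves; each slower wheel moves only when the wheel before it
    rolls over."""
    positions = list(positions)
    for i in range(len(positions)):
        positions[i] = (positions[i] + 1) % size
        if positions[i] != 0:     # no rollover, so no carry to the next wheel
            break
    return positions

# Three wheels repeat after exactly 26**3 = 17576 steps: a short,
# perfectly regular cycle, which is why the statistics stand out.
state, n = [0, 0, 0], 0
while True:
    state, n = step_odometer(state), n + 1
    if state == [0, 0, 0]:
        break
assert n == 26 ** 3
```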

All of that was done in some of the last of the mechanical cipher machines. But at the end of the day they are all not just deterministic; due to other defects they are in effect reversible. That is, you can wind their internal state backwards. Which means that if you can simulate the machine fast enough, you can go through every required state, mapping out the actual state. Thus if you have both enough ‘known plaintext’ and its corresponding ‘ciphertext’, you can analyse it, pull gently at those multiple loose threads that are the message statistics, and fully determine the internal state.

Whilst there were ways that could have been done, the mechanical complexity would have been too great and friction would have stopped it working. Worse still, mechanical slop would make anything more than a handful of geared wheels too inaccurate to use. Thus the limit on mechanical ciphers was the laws of nature. Alan Turing’s contemporary at Bletchley, and arguably smarter when it came to breaking Enigma, Gordon Welchman, found this was a limit you could not beat by clever design. The only solution, as with the Bombes, was to move from mechanics, even electromechanics, to the fully electronic, where other laws of nature still apply, as we have found out with Moore’s Law (which is not a law) hitting the buffers of the real laws of thermodynamics.

Thus the trick to making secure deterministic systems is to make the unwinding of the internal state so complex that not only is there not enough time in the universe to run the statistics, there is also not enough matter to hold all the states the statistics would need to find sufficient meaningful information.

Oh, and the easier way: ‘Roll God’s dice and take a couple of notes, for you and your buddy’.
