NIST’s Post-Quantum Cryptography Standards

Quantum computing is a completely new paradigm for computers. A quantum computer uses quantum properties such as superposition, which allows a qubit (a quantum bit) to be not simply 0 or 1 but a combination of both at once. In theory, such a computer can solve problems too complex for conventional computers.

Current quantum computers are still toy prototypes, and the engineering advances required to build a functionally useful quantum computer are somewhere between a few years away and impossible. Even so, we already know that such a computer could potentially factor large numbers and compute discrete logarithms, breaking the RSA and Diffie-Hellman public-key algorithms at all of the useful key sizes.

Cryptographers hate being rushed into things, which is why NIST began a competition to create a post-quantum cryptographic standard in 2016. The idea is to standardize both public-key encryption and digital-signature algorithms that are resistant to quantum computing, well before anyone builds a useful quantum computer.

NIST is an old hand at this competitive process, having previously done this with symmetric algorithms (AES in 2001) and hash functions (SHA-3 in 2015). I participated in both of those competitions, and have likened them to demolition derbies. The idea is that participants put their algorithms into the ring, and then we all spend a few years beating on each other’s submissions. Then, with input from the cryptographic community, NIST crowns a winner. It’s a good process, mostly because NIST is both trusted and trustworthy.

In 2017, NIST received eighty-two post-quantum algorithm submissions from all over the world. Sixty-nine were considered complete enough to be Round 1 candidates. Twenty-six advanced to Round 2 in 2019, and seven (plus another eight alternates) were announced as Round 3 finalists in 2020. NIST was poised to make final algorithm selections in 2022, with a plan to have a draft standard available for public comment in 2023.

Cryptanalysis over the course of the competition was brutal. Twenty-five of the Round 1 algorithms were attacked badly enough to remove them from the competition. Another eight were similarly attacked in Round 2. But here’s the real surprise: there were newly published cryptanalysis results against at least four of the Round 3 finalists just months ago—moments before NIST was to make its final decision.

One of the most popular algorithms, Rainbow, was found to be completely broken. Not that it could theoretically be broken with a quantum computer, but that it can be broken today—with an off-the-shelf laptop in just over two days. Three other finalists, Kyber, Saber, and Dilithium, were weakened with new techniques that will probably work against some of the other algorithms as well. (Fun fact: Those three algorithms were broken by the Center of Encryption and Information Security, part of the Israel Defense Forces. This represents the first time a national intelligence organization has published a cryptanalysis result in the open literature. And they had a lot of trouble publishing, as the authors wanted to remain anonymous.)

That was a close call, but it demonstrated that the process is working properly. Remember, this is a demolition derby. The goal is to surface these cryptanalytic results before standardization, which is exactly what happened. At this writing, NIST has chosen a single algorithm for general encryption and three digital-signature algorithms. It has not chosen a public-key encryption algorithm, and there are still four finalists. Check NIST’s webpage on the project for the latest information.

Ian Cassels, British mathematician and World War II cryptanalyst, once said that “cryptography is a mixture of mathematics and muddle, and without the muddle the mathematics can be used against you.” This mixture is particularly difficult to achieve with public-key algorithms, which rely on the mathematics for their security in a way that symmetric algorithms do not. We got lucky with RSA and related algorithms: their mathematics hinge on the problem of factoring, which turned out to be robustly difficult. Post-quantum algorithms rely on other mathematical disciplines and problems—code-based cryptography, hash-based cryptography, lattice-based cryptography, multivariate cryptography, and so on—whose mathematics are both more complicated and less well-understood. We’re seeing these breaks because those core mathematical problems aren’t nearly as well-studied as factoring is.

The moral is the need for cryptographic agility. It’s not enough to implement a single standard; it’s vital that our systems be able to easily swap in new algorithms when required. We’ve learned the hard way how algorithms can get so entrenched in systems that it can take many years to update them: in the transition from DES to AES, and the transition from MD4 and MD5 to SHA, SHA-1, and then SHA-3.

We need to do better. In the coming years we’ll be facing a double uncertainty. The first is quantum computing. When and if quantum computing becomes a practical reality, we will learn a lot about its strengths and limitations. It took a couple of decades to fully understand von Neumann computer architecture; expect the same learning curve with quantum computing. Our current understanding of quantum computing architecture will change, and that could easily result in new cryptanalytic techniques.

The second uncertainty is in the algorithms themselves. As the new cryptanalytic results demonstrate, we’re still learning a lot about how to turn hard mathematical problems into public-key cryptosystems. We have too much math and an inability to add more muddle, and that results in algorithms that are vulnerable to advances in mathematics. More cryptanalytic results are coming, and more algorithms are going to be broken.

We can’t stop the development of quantum computing. Maybe the engineering challenges will turn out to be impossible, but it’s not the way to bet. In the face of all that uncertainty, agility is the only way to maintain security.

This essay originally appeared in IEEE Security & Privacy.

EDITED TO ADD: One of the four public-key encryption algorithms selected for further research, SIKE, was just broken.

Posted on August 8, 2022 at 6:20 AM

Comments

Mathieu August 8, 2022 7:45 AM

I often see discussion about Symmetric and Asymmetric encryption and Digital Signature, but why isn’t there much discussion about Key Exchange (distributed key generation with PFS, à la Diffie-Hellman)?

If we have Key Exchange, Symmetric encryption and Digital Signature, the use for Asymmetric encryption is greatly reduced.

Is that a problem that’s “already solved” (in the sense that we already have quantum resistant key exchange algorithms)?

Carl Mitchell August 8, 2022 8:21 AM

Cryptographic agility is important for safety when deploying untested new algorithms, but it’s also a liability because it often allows downgrade attacks and creates confusion among users (see TLS cipher suites). Within a single protocol version there should be no agility, only across versions. That makes it far harder to attack, though it’s a higher-level concern than what NIST is standardizing.

I’d also say we need hybrid systems for all these PQC schemes, where both a classical scheme and a PQ scheme need to be broken to break the desired property. E.g. signatures should require both a PQ signature and an RSASSA-PSS signature to validate.
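
A minimal sketch of that idea in Python, assuming the third-party cryptography package for the RSASSA-PSS half; the pq_verify argument is a hypothetical stand-in for whatever post-quantum verification routine gets paired with it:

```python
# Hybrid signature check: accept a message only if BOTH the classical
# RSASSA-PSS signature and the post-quantum signature verify.
# Illustrative sketch only, not a complete protocol.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def rsa_pss_ok(rsa_pub, message: bytes, sig: bytes) -> bool:
    try:
        rsa_pub.verify(
            sig, message,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False

def hybrid_verify(rsa_pub, pq_pub, message, rsa_sig, pq_sig, pq_verify) -> bool:
    # pq_verify is a placeholder callable for the post-quantum scheme's
    # verification; breaking either scheme alone is not enough to forge.
    return rsa_pss_ok(rsa_pub, message, rsa_sig) and pq_verify(pq_pub, message, pq_sig)
```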

Al Sneed August 8, 2022 8:23 AM

NIST is both trusted and trustworthy

DJB doesn’t seem to think so. Any comments on his latest lawsuit?

Carl Mitchell August 8, 2022 8:26 AM

@Mathieu, The combination of a Key Encapsulation Mechanism or Key Exchange Mechanism with an Authenticated Encryption mechanism is commonly called “Asymmetric Encryption”. There aren’t any asymmetric schemes that can directly encrypt large messages in an efficient manner, so some form of hybrid system is used instead. TLS does this, ECIES does this, age does this, PGP does this, etc.
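
For anyone wondering what that looks like concretely, here is a minimal ECIES-style sketch in Python, assuming the third-party cryptography package; classical X25519 stands in for whichever KEM or key exchange a real protocol would use:

```python
# Hybrid encryption sketch: an ephemeral Diffie-Hellman exchange establishes
# a shared secret, a KDF turns it into a symmetric key, and an AEAD cipher
# encrypts the actual (arbitrarily large) message.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey, X25519PublicKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def _derive_key(shared_secret: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"demo hybrid encryption").derive(shared_secret)

def hybrid_encrypt(recipient_pub: X25519PublicKey, plaintext: bytes):
    eph = X25519PrivateKey.generate()               # ephemeral key pair
    key = _derive_key(eph.exchange(recipient_pub))  # "encapsulated" secret -> key
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, plaintext, None)
    return eph.public_key(), nonce, ct              # send all three to the recipient

def hybrid_decrypt(recipient_priv: X25519PrivateKey, eph_pub, nonce, ct) -> bytes:
    key = _derive_key(recipient_priv.exchange(eph_pub))
    return AESGCM(key).decrypt(nonce, ct, None)
```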

Anton August 8, 2022 9:13 AM

I’m a bit confused about the difference between “general encryption” and “public-key encryption”. Does “general encryption” imply a hybrid scheme with a PQ key-exchange algorithm plus a regular symmetric encryption scheme, or is it the other way around?

Denton Scratch August 8, 2022 9:53 AM

We have too much math and an inability to add more muddle

I take it that “muddle” is code for reversible scrambling schemes, such as S-boxes.

I, a layman, don’t have standing to question the Great Bruce. But don’t these scrambling operations amount to obfuscation? Doesn’t the security of a scheme depend entirely on the maths? Like, isn’t the “muddle” just something that slows down (maybe a lot) the cryptanalytic maths?

My instinct is to want a crypto scheme that doesn’t need “muddle”, because the maths guarantees that a message can’t be decrypted except by brute force (or knowing the key). Why is “muddle” a good thing?

Ray Dillinger August 8, 2022 9:57 AM

It bugs me that NIST is picking a winner when the initial results are demonstrating first that losers are so prevalent and second that it can take this long to show that they are losers.

Just as a statistical matter, we shouldn’t be assuming that we have come up with something secure until we have at least a couple of years during which most of the accepted proposals are apparently secure, and a couple more years when breaks among the remaining pool of candidates have stopped being found. In my estimation finding a break in a remaining candidate five years after the contest started should have postponed any possibility of announcing a winner until the remaining candidates withstand attacks and analysis for at least another five years.

That said there’s a point at which unattainable perfection becomes the enemy of achievable good, and if we’re really pressed for time on the quantum front we may not be able to wait that long.

But the prevalence and timeline of breaks here is a strong indication that we don’t understand what we’re doing well enough to standardize yet.

Also the notion that NIST is trustworthy is controversial at best. Given the astonishing level of government corruption we have seen worldwide in the last half-dozen years it’s really hard to assert that any agency primarily in the service of governments hasn’t been compromised.

Ray Dillinger August 8, 2022 10:04 AM

A thing I neglected to mention above is that the industry’s experience of ‘algorithm agility’ so far has been nothing short of nightmarish for security. Typically we’ve seen security reduced to the LEAST secure of the available algorithms in cipher negotiation. Before the break is generally known this is a zero-day attack; after the break is known it’s first a ‘deprecated’ cipher that remains in use for a decade and then a downgrade attack that remains in use for another decade.

We really don’t want to do that again! If we’re going to go down that garden path of ‘cipher agility’ again we need to first somehow unfuck our protocols for deciding what cipher is to actually be used in a given case.

Quantry August 8, 2022 12:12 PM

@ Al Sneed, @ TimH Re: DJB Lawsuit: That’s quite a rigorous indictment.

Definitely seems to contradict

“It’s a good process, mostly because NIST is both trusted and trustworthy.”

I’ve been favoring the likes of France’s “HYBRID” PQC Transition (‘https://www.ssi.gouv.fr/en/publication/anssi-views-on-the-post-quantum-cryptography-transition/) mentioned in the indictment:

A hybrid mechanism (key establishment or signature) combines the computations of a recognized pre-quantum public key algorithm and an additional algorithm conjectured post-quantum secure. This makes the mechanism benefit both from the strong assurance on the resistance of the first algorithm against classical attackers and from the conjectured resistance of the second algorithm against quantum attackers…

…hybridation is a relatively simple construction

[by comparison, until the smoke clears. And far less of “lets trust nist”. ]
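
A minimal sketch of the combiner ANSSI is describing, assuming the third-party cryptography package; the two inputs are placeholders for whatever pre-quantum (e.g. ECDH) and post-quantum (e.g. lattice KEM) shared secrets the protocol actually establishes:

```python
# Hybrid key establishment: derive the session key from BOTH a classical
# shared secret and a post-quantum shared secret, so the result stays secure
# as long as at least one of the two mechanisms remains unbroken.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def combine_secrets(ss_prequantum: bytes, ss_postquantum: bytes) -> bytes:
    # Concatenate-then-KDF combiner over the two shared secrets.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"hybrid pre-quantum + post-quantum").derive(
                    ss_prequantum + ss_postquantum)
```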

Clive Robinson August 8, 2022 12:42 PM

@ Bruce, Ray Dillinger, ALL,

“The moral is the need for cryptographic agility. It’s not enough to implement a single standard; it’s vital that our systems be able to easily swap in new algorithms when required. “

As I’ve been saying for most of this century, NIST needs to come up with a "framework standard" in which not just cryptographic algorithms but also the mode algorithms they are used in can quickly and easily be changed. And it needs to be built into all crypto implementations, especially on those "unseen" and "embedded" devices that have 25-50 year or more expected lifetimes, like medical, industrial and infrastructure electronics.

As @Ray Dillinger notes above,

“… there’s a point at which unattainable perfection becomes the enemy of achievable good, and if we’re really pressed for time on the quantum front we may not be able to wait …”

Whilst true it has a flip side, in that cryptographic algorithms just don’t last as long as people or many other things do…

After all you do not want to have to make the choice between,

1, A malicious cracker making you “break dance on the side walk”
2, A surgeon to “crack your chest” to “patch the box”.

Because the 20 or 30 year old cryptographic algorithms protecting the “Over The Air”(OTA) interface on your pacemaker have been broken…

Nor do you want a malicious cracker making your “Smart Meter” change the way it reads and reports your energy usage so your bill quadruples or more. Because the 20-30 year old cryptographic algorithms it uses are broken.

Then there are "Industrial Control Systems"(ICS) that run manufacturing plant. Some of you will know that a few months back there were serious issues with "baby formula" in the US and there were injuries and deaths, followed by a recall then massive shortages. Whilst this time it was most likely due to "malicious negligence" by management of putting profit over maintenance thus safety, the trend in ICS is to increase the use of non-wired communications with the equivalent of wireless local/mesh networks.

Thus cryptography will be needed to protect the systems not just now but for the entire lifetime of these products, many of which can be in place for 25-50 years or more. For instance there are "lifts" still in use that are over 100 years old; yes, they have in most cases been "upgraded", but the cost and difficulties involved are enormous.

Therefore these systems with "fixed cryptography" that are not easily possible to upgrade are going to make valuable prizes for the next wave of ransomware attackers, should the cryptographic algorithms or usage mode algorithms be deficient, and they almost certainly will be.

Because the reality of cryptographic history shows that cryptographic base algorithms and usage mode algorithms just don’t last… DES was seriously broken within 25 years, and various hash algorithms have been broken in a lot less. And people are talking about it being time to reassess AES as it too, like most other "block algorithms", is vulnerable to Quantum Computing. Resulting in AES’s effective keyspace being considered, by those of a conservative view, as now "way too small", and we should be looking at 512 bits and up at least.

Which is something else “Post Quantum Cryptography”(PQC) is teaching us,

“Key size is a serious problem”

Back with DES, 56 bits was inflicted by the NSA, and 40 bits for export. That is considered laughable these days by all. But remember even back then 64 bits was considered inadequate, by amongst others IBM. Remember it was IBM that actually designed DES with enforced NSA constraints. Prior to the NSA getting involved, IBM were developing 128-bit cipher systems for commercial use. Most AES implementations even now are 128-bit because the view back at design time was,

“128bits is ‘conveniently small’ for programmers and microcontrollers.”

Well QC will effectively make 128bit security today look the equivalent of 64bit… And that’s with the QC algorithms we so far have “thought up”…
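
That halving is Grover’s algorithm, which searches an unstructured keyspace of size 2^n in roughly 2^(n/2) quantum evaluations; a quick illustration:

```python
# Grover's algorithm gives a quadratic speedup on brute-force key search, so
# an n-bit symmetric key offers roughly n/2 bits of security against a
# quantum attacker; doubling the key length restores the original margin.
for n in (128, 256, 512):
    print(f"{n}-bit key: ~2^{n} classical trials, ~2^{n // 2} Grover iterations")
```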

One thing that is a given, is that when –not if– “General QC” becomes a practical reality, there will be a great deal of work on finding newer more efficient or faster algorithms for all sorts of issues. Thus it is highly likely any PQC cryptographic algorithms we come up with now, will get more specific QC attack algorithms developed for them.

So not only do we need a “Plug and Pray” Framework Standard, it needs to consider as a minimum key storage space up in the 1Mbit range.

SpaceLifeForm August 8, 2022 1:03 PM

@ Quantry

NSA Doesn’t like Hybrid

That stance may tell you something.

This is from 2021-09-02.

Note: There are pics of slides that will be difficult to read on a phone.

‘https://nitter.net/mjos_crypto/status/1433443198534361101

BCS August 8, 2022 3:08 PM

Has anyone tried to hedge bets on the math side of things by building a candidate on multiple “unrelated” hard problems?

Or has anyone explicitly built a candidate so that breaking it would require advancements that would be useful outside cryptography? (Sure they broke my candidate, but they also solved this hard problem so I can now build a better navigation app!)

Clive Robinson August 9, 2022 2:49 AM

@ BCS, Quantry, SpaceLifeForm, ALL,

Re : Hybrids

“Has anyone tried to hedge bets on the math side of things by building a candidate on multiple “unrelated” hard problems?”

The problem with these One Way Function (OWF) “hard problems” is that even in the easy direction they consume quite a few resources as do their “Trapdoors”.

Thus the NIST competition rules emphasizing both speed and efficiency tend to preclude any hybrids (and this is very probably via the NSA influence).

Worse, it appears that actually, although OWFs are common, Trapdoor OWFs are rare, very rare… Which is why so many of the entrants are effectively variations of the same OWF class.

We’ve seen this competition exploitation / finessing by the NSA before with the AES competition, where the NSA encouraged through NIST an algorithm with significant implementation weaknesses with regard to time-based side channels to win (and why the NSA only approve AES for "data at rest" security in the likes of their IME).

To spot the "black hand" of the NSA influence on NIST and its competitions, you need to look for what is "not there" that should be there, and what is "not said" that should be said.

What DJB’s court case is about is part of the “what is not being said” issue.

In the case of the AES competition there were no tests for time based side channels included, which the NSA were well aware there should have been (as were others, who got ignored). Worse, emphasis was deliberately placed on speed and minimum resources separately, along with public release of the code. So to get speed, things like "loop unrolling" were used, as were "unbalanced branching" and similar, all of which made implementation side channels a certainty. Which with the public availability of this bad but fast code would get "cut-and-paste" implementation into every software library. Which was unsurprisingly what happened…

So here we are getting on for a quarter of a century later and there are many many bad AES implementations out there leaking secret information via time based side channels that can be seen in network timing[1]…

It’s why I tell people to do both encryption and decryption “off-line” by using two computers suitably “gapped”. The first is the “communications end point” and considered “insecure” the second is the “encryption end point” and needs to be secure. Information crosses the “gap” between the two in a way that breaks any time based side channels.

This way you are not fixing the time based side channel issue but you are mitigating it, along with several other issues.

[1] Remember for "covertness" the NSA tries to avoid attacking network "leaf nodes" where the "targets" can examine the equipment they use, as it’s under their physical control. The NSA thus hide just upstream of the target’s physical control and so are not visible to the target. However the bad implementation of AES with the secret information leaking side channels is visible at the upstream nodes from the leaf node. So the NSA get to see the time based side channels, whilst the target can not see the NSA.

Clive Robinson August 9, 2022 4:32 AM

@ Sprewell, ALL,

Re : You can’t stop development.

Refers not to the "end result" –which may or may not be obtainable– but to the "process" of finding out.

The "leading edge" of research is often less than humorously called the "bleeding edge" in reference to its rapidly increasing cost still showing no results. In fact something like 9/10ths of all research goes nowhere on average. However the amount of money people will risk on research depends very much on,

“The perceived value of the goal”

In the case of "Quantum Computing"(QC), looking at its worth in established fields of endeavor, outside of one or two very narrow fields it does not really have any currently.

However the goal in currently established fields that is a prize worth risking "the farm" for is cryptanalysis. And that’s not just the likes of the NSA with its "collect it all" policy. Imagine what the likes of Cloudflare, Google and others who own backbone could do if they could go back to the pre "SSL For All" days as far as data collection goes?

But… What of the future? Not much is said about AI, but for various reasons QC will also make significant improvements to AI on bulk data and the like. Likewise "Machine Learning"(ML). Both of which are very much in their infancy.

Getting QC and ML together would be a prize more valuable to most than being able to read everyones old traffic.

With such valuable prizes up for grabs, the statement,

“We can’t stop the development of quantum computing. Maybe the engineering challenges will turn out to be impossible, but it’s not the way to bet.”

Is very much a true explanation of what is currently happening.

My viewpoint is that QC will happen, but that its chances of scaling depend on "noise removal".

All computers are essentially based on analog processes. Original “analog computers” worked in the linear zone of electronics via what we now call Op-Amps which is short for “Operational Amplifiers”. The down sides of which are,

1, Gain
2, Offsets/bias
3, Non linearity
4, Metastability
5, Speed of operation
6, Bandwidth
7, Noise

By limiting each amplifier to just a binary output with a large "dead zone" in the middle, nearly all of these problems were solved. However you had to use many "bits in parallel" to get the "range" that a single analog signal could give.

However with going to binary bits, you could just keep adding them in parallel as far as you desired[1].

The problem with "Quantum Bits"(Qbits) is that, like an analog computer, the useful range is in each bit, and we do not yet know how to get more than a few binary bits equivalent out of them.

To be successful against cryptographic systems we are looking at a minimum of getting 10 to 20 times the number of bits accuracy out of that range than we currently get. There are three problems,

1, Instrumentation noise
2, Speed of reading
3, More than exponential cost rise for each binary bit equivalent increase in resolution.

The first we are unlikely to solve with our current instrumentation methods (-174dBm/hz noise floor). Especially as “decoherence” of Qbits can happen in less than a millionth of a second. The third is not unexpected as we’ve seen it before with digital semiconductors.

As some point out these issues may be,

“Engineering not physical law limitations, so are in theory solvable”

Me I’m not so sure, unless we find a way to “Digitize Qbits” and obviously Q-algorithms that will work with them.

[1] Back in the 1980’s I worked on the design of a “Parallel CPU” for an image processing computer that was over 400bits wide for medical imaging, and as part of that developed independently a way to put multiple hard drives in parallel with error correction just to keep the beast fed with data.

Clive Robinson August 9, 2022 5:11 AM

@ Denton Scratch,

Re : Muddle is actually Confusion.

“I take it that “muddle” is code for reversible scrambling schemes, such as S-boxes.”

It’s a lot more than that. Claude Shannon called it "confusion" and it’s to do with the statistics of the message and something called "unicity distance", and it digs deep into the foundations of information theory.

A system with "perfect secrecy" has a unicity distance as large as the message, no matter how long.

Most block ciphers’ unicity distance is down to around two block lengths.

From a practical perspective, on a "brute force search" you only need to do two blocks with each key to know you have the right key… Similar applies to stream ciphers, where you only need check lengths of about two to three times the generator internal state array size.
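
As a rough back-of-envelope check, Shannon’s formula U = H(K)/D makes the same point; the 3.2 bits-per-character redundancy figure for English used below is an assumed textbook estimate:

```python
# Unicity distance estimate: U = H(K) / D, where H(K) is key entropy in bits
# and D is plaintext redundancy per character (~3.2 bits/char for English,
# an assumed figure). U is roughly how much ciphertext pins down a unique key.
def unicity_distance(key_bits: int, redundancy_bits_per_char: float = 3.2) -> float:
    return key_bits / redundancy_bits_per_char

for key_bits in (128, 256):
    chars = unicity_distance(key_bits)
    print(f"{key_bits}-bit key: ~{chars:.0f} characters "
          f"(~{chars / 16:.1f} sixteen-byte blocks)")
# A 128-bit key gives ~40 characters, i.e. roughly two and a half AES blocks,
# which is why checking a couple of blocks per candidate key is enough.
```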

The simplest "perfect secrecy" system we know of is the so called "One Time Pad"(OTP) and variants, where the unicity distance is as long as the message.

The practical upshot is that with the OTP,

“Every Message is equiprobable”

Or should be… But in practice it is not, because the plaintext into the OTP is not equiprobable unless you take other "pre-processing steps" on the plaintext.

More simply, if all your plaintext is just English words, and as a cryptanalyst you know that, then with a brute force search on say a short message, any message that is not uniquely English words making sense can be eliminated, thus the equiprobability no longer holds. Further, English plaintext has its own unicity distance of around 3-5 letters, which much lightens the cryptanalytic load.

The way to get around this plaintext issue is to apply pre-encryption, pre-coding, and importantly compression to the plaintext to "flatten its statistics".

Which is why it’s vital to at least "double encrypt" all Microsoft File formats with two entirely unrelated encryption algorithms. As well as use "Russian Coupling".

Oh one thing about the OTP that is rarely mentioned is “deniability” under second party betrayal.

That is as the first party you send a message which is OTP encrypted then the resulting ciphertext –not the plain text– is “armoured” against transmission errors as well as authenticating the sender. With traditional encryption there can be no deniability if the second party reveals the message key.

However with the OTP you can just as easily produce a different key that produces an entirely different plaintext of your choosing. Thus providing your other actions are circumspect there is no evidence as to which of the two messages is valid. Thus the second party who betrays the first party sender is actually only betraying themselves.

SpaceLifeForm August 9, 2022 5:55 AM

@ ALL

re: NSA does not want PQ-Hybrid

‘https://mailarchive.ietf.org/arch/msg/cfrg/YBHlOm1YUCFZDTyzof76IKTl0oY/

Clive Robinson August 9, 2022 7:48 AM

@ SpaceLifeForm, ALL,

Re: NSA does not want PQ-Hybrid

It’s rather more than “hedging bets” to think about.

We know that, as far as "Quantum Computing"(QC) is concerned, there are only some types of crypto that will be,

1, Fully vulnerable
2, Semi vulnerable
3, Not vulnerable.

Those based on a current "Trapdoored One Way Function"(T-OWF) using a Trapdoor based on the mathematics of factoring or logarithms are fully vulnerable.

Those such as block and stream ciphers that use other types of “One Way Function”(OWF) are semi vulnerable, and in effect need much larger effective key spaces (~double the number of bits).

Then there are those that have "perfect secrecy", such as the One Time Pad, that are not affected by currently known QC algorithms.

Thus the first thing we logically should do is deal with the second group and double the effective number of key bits. Especially as this is something we know how to do.

Secondly we need to identify all the functionality the fully vulnerable group gives us and start working on how to get the same or similar functionality without T-OWFs, as new T-OWFs appear to be not just very scarce, but even when thought to be secure against currently known "Quantum Algorithms"(QAlgs) are showing themselves as vulnerable to "Classical Algorithms"(CAlgs).

Thus we may find that all T-OWFs are vulnerable to

“One or the other, or both QAlgs and CAlgs”

In which case our only option is to find non T-OWF ways of doing the extra functionality that T-OWFs have given us so far.

Which is why I mentioned a few days back my preference for looking at "Quantum Key Distribution"(QKD) arranged in a shared secret method, to not just considerably extend the range of individual QKD systems, but also resolve the "switching" issue (QKD is effectively single point to point and can not be switched currently). Thus allow the secure transmission of a One Time Pad to both parties, who can then share a root of trust securely, at which point current non T-OWF crypto algorithms can be used.

We probably have the time to do this "mitigation", whereas I suspect any PQC we come up with, even if secure against current QAlgs and CAlgs, will fairly quickly get found not to be as our understanding of maths etc improves.

But with regards "Hybrid" I’ve always recommended chaining current cryptographic algorithms as a standard defensive measure. Providing the basic structure and functionality are different, breaking one will probably not break the other.
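
A minimal sketch of that sort of chaining, assuming the third-party cryptography package: AES-GCM (an AES-based AEAD) wrapped inside ChaCha20-Poly1305 (an ARX stream-cipher AEAD), with independent keys; illustrative only, not a vetted design:

```python
# Chained double encryption with two structurally different AEAD ciphers.
# Recovering the plaintext requires breaking both layers; keys must be
# independent 32-byte values (e.g. AESGCM.generate_key(256) and
# ChaCha20Poly1305.generate_key()).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

def chain_encrypt(key_aes: bytes, key_chacha: bytes, plaintext: bytes):
    n1, n2 = os.urandom(12), os.urandom(12)
    inner = AESGCM(key_aes).encrypt(n1, plaintext, None)
    outer = ChaCha20Poly1305(key_chacha).encrypt(n2, inner, None)
    return n1, n2, outer

def chain_decrypt(key_aes: bytes, key_chacha: bytes, n1, n2, outer) -> bytes:
    inner = ChaCha20Poly1305(key_chacha).decrypt(n2, outer, None)
    return AESGCM(key_aes).decrypt(n1, inner, None)
```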

I’ve also recommended that people look at producing true amalgamated hybrids. Think, if you want an example, of AES split into two parts: the AES rounds structure, and the AES structure that generates the round keys from the master key. Think about replacing the latter round key generation structure with a secure stream key generator.

Therefore for PQC hybrids you need to consider firstly,

1, Chain or true hybrid.

Which has two further options assuming one algorithm is PQC secure, what do you chose for the second algorithm,

1, Current QC semi vulnerable but currently secure algorithm.
2, Another PQC algorithm.

My preference due to speed of change would be,

“Chain PQC with an increased key size current algorithm.”

That way you can quickly and relatively painlessly upgrade the system as and when newer more secure algorithms are required.

I’d also “re-open the book” on some of the other AES competition finalists with a view to increasing their key size and other security parameters such as implementations that have side channel reduction built in.

fib August 9, 2022 8:33 AM

@ All

Re “cryptography is a mixture of mathematics and muddle”

Muddle is obfuscation by another name, right? But I’ve been told that obfuscation is not the best of practices – and ineffective anyway.

I admit that I’m always looking for ways to add some obfuscation whenever possible – it gives me a heightened sense of security. So have I been doing the right thing all along?

Clive Robinson August 9, 2022 9:29 AM

@ fib,

Muddle is obfuscation by another name, right?

Obfuscation does not have an information-theoretic meaning.

Muddle in this usage case[1] is the same as Claude Shannon’s “confusion” and it does have an information theoretic meaning.

https://en.m.wikipedia.org/wiki/Confusion_and_diffusion

[1] In the original Ian Cassels usage it covered both confusion and diffusion.

Clive Robinson August 9, 2022 10:07 AM

@ ALL,

For those that do not know, Ian Cassels was actually John William Scott Cassels, with his father being John William Cassels. Back in the period between the two world wars it was common for this to happen, especially in Scottish families[1], and it was likewise common for the child to have a sort of short nickname[2].

Anyway, his connection to cryptography was a bit more than doing new things at Bletchley Park. He also did early work on elliptic curves that later gave us a new form of asymmetric cryptography.

He died just a few years ago and you can read his obituary,

https://mathshistory.st-andrews.ac.uk/Biographies/Cassels/

[1] If you read Terry Pratchett books the “Wee Free Men” will give you an idea of the problems with this.

[2] For my misfortunes, being half Scottish, I was likewise given what was in effect a "family" or "hand-down" name. So at a family gathering if you called it out many heads would look up… Worse, it also got used in an advert on TV when I was very young, and the teasing at school stopped me using it… Especially as some I had the misfortune to go to school with appeared incapable of saying certain letters like H, T, and R, something known as "One of those" by teachers of the time, implying the objects of the comment were more common than muck (as in ‘Cockney mudlarks’[3]).

[3] During WWII many children got moved out of the East End of London for their own safety. And this led to re-housing not just of children but whole families. Especially after the War, as they started an almost "slash and burn" methodology to the East End, new housing was built in "New Towns" and local councils in the area also purchased land outside of London on which they built housing estates and moved hundreds of families at a time. Understandably it caused not just friction in the area but other problems, like ridiculously large class sizes where disruption was so frequent it amazes me that anyone managed to learn anything at all.

David in Toronto August 9, 2022 10:31 AM

@clive English and Scottish nicknames can get very confusing even for other English speakers. There was a "Knobby" Clarke at Bletchley and a namesake in "The Ministry of Ungentlemanly Warfare". Apparently a common Clarke nickname. Imagine my surprise after thinking this clever guy really got around. Even in Canada we see it sometimes but rarely. I know a family of "Swotty’s". Why anyone does this to their kids befuddles me.

Ray Dillinger August 9, 2022 12:09 PM

One of the things I researched for a college class long ago was the feasibility of extending block algorithms to reach unicity distance equal to message size.

Before going further I should clarify a point; I was using Friedman’s definition of Unicity distance (the number of different messages that something could be decrypted to is 2 to the power of the sum of key size and the number of message bits unknown) rather than Shannon’s (all ciphertexts of a given length are equiprobable). The difference is that Shannon’s Unicity can be the same as the message size only if the key is also equal to the message size whereas Friedman’s is defined for a general key size.

This is possible to achieve by repeated rounds in which each round is the application of a conventional block cipher in ECB mode, followed by a bitwise transposition cipher to create the blocks of the next round, in which each bit depends on an equal (and maximal) number of the original plaintext blocks.

The end result (assuming that underlying block cipher achieves good diffusion) is that EVERY bit of the ciphertext depends equally on EVERY bit of the plaintext. This achieves Friedman’s Unicity.
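
A toy sketch of the round structure Ray describes, assuming the third-party cryptography package, with AES-ECB as the per-round block cipher and a simple bit-level transposition between rounds; this is a reading aid for the construction as described, not a vetted design:

```python
# Repeated rounds of a block cipher in ECB mode with a bit-level transposition
# between rounds, so every ciphertext bit comes to depend on every plaintext
# bit. Input must already be padded to a multiple of 16 bytes.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK_BITS = 128  # AES block size in bits

def _ecb(key: bytes, data: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(data) + enc.finalize()

def _bit_transpose(data: bytes, n_blocks: int) -> bytes:
    # View the message as an (n_blocks x 128) bit matrix and read it out
    # column-wise, so each new block draws bits from all original blocks.
    bits = [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]
    rows = [bits[r * BLOCK_BITS:(r + 1) * BLOCK_BITS] for r in range(n_blocks)]
    mixed = [rows[r][c] for c in range(BLOCK_BITS) for r in range(n_blocks)]
    out = bytearray(len(data))
    for i, b in enumerate(mixed):
        out[i // 8] |= b << (7 - (i % 8))
    return bytes(out)

def wide_encrypt(key: bytes, padded_plaintext: bytes, rounds: int = 4) -> bytes:
    n_blocks = len(padded_plaintext) // 16
    state = padded_plaintext
    for _ in range(rounds):
        state = _bit_transpose(_ecb(key, state), n_blocks)
    return _ecb(key, state)  # final cipher layer after the last mixing step
```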

It is not particularly practical to apply to streams etc, and slicing and dicing the blocks bitwise between each round occupies more compute horsepower than applying the cipher block wise. And if the message is large it will definitely blow cache and use off-chip memory. Finally the total cost of the construction is proportional to the log base 2 of the message size minus log base 2 of the block size.

That said, it makes the cost of a brute force conventional attack proportional to 2 raised to the power of the message size, rather than 2 raised to the power of the block or key size. And if I understand the math correctly, it makes the cost of a quantum attack proportional to sqrt(2) raised to power of message size.

All this dry math comes to the following: If you append a 512-bit nonce to the message, it’s the same cost to a brute-force (or quantum) attack as if you’d used a key 512 bits longer. If you want to secure something tremendously sensitive using a 40-bit key, you can do it. If you want the same security as doubling the key size, you can just add a nonce the same size as the key.

Excessive costs and all, I’d be entirely happy using this kind of construction against quantum attacks. It would plug the hole, anyway, for symmetric ciphers. Asymmetric or public-key ciphers, I’m less happy about.

SpaceLifeForm August 9, 2022 2:13 PM

@ ALL

Silicon Turtles

It likely does not matter what NIST concludes.

‘https://arstechnica.com/information-technology/2022/08/architectural-bug-in-some-intel-cpus-is-more-bad-news-for-sgx-users/

The researchers’ proof-of-concept exploit, available here, is able to obtain a 128-bit AES decryption key on average in 1.35 seconds, with a success rate of 94 percent. The exploit can extract a 1024-bit RSA key on average in 81 seconds with a success rate of 74 percent.

David in Toronto August 9, 2022 4:13 PM

@SpaceLifeForm

Presumably you need to be on the box to exploit this. It is not something you will be exploiting from just beyond the network perimeter. The distinction is necessary for proper risk assessment.

If you’re really concerned you’ll be using something like an HSM and not doing your crypto on the box.

Mind you if I can get on your box, does it really matter? I can likely see the plain text anyway!

Clive Robinson August 9, 2022 4:16 PM

@ SpaceLifeForm,

Re : Silcon Turtles,

Before even looking at the article my first thought was a new variant on,

“The Xmas gift that keeps giving”

Along with,

“Their track record says they won’t fix, or they’ll make it worse.”

But imagine my shock on reading it…

A simple memory read is all that is required…

In essence the problem is as easy to understand as not clearing a buffer created by malloc() before you call free()…

And I’ve been warning about Intel and its IO memory management for some time…

So yes new, and jaw dropping, but honestly not unexpected.

And Intel’s solution… is not going to work with existing software that you can not alter…

So "trebles all round" in the commercial software market…

Clive Robinson August 9, 2022 4:34 PM

@ Ray Dillinger,

Re Unicity distance.

Shannon’s "all messages equally probable" definition by inference gives the useful first party "deniability".

I’m going to have to think about it but I’m not sure Friedman’s definition does.

(if you hear a strange whirring noise "off stage left" that will be the cogwheels spinning up in this mini heat wave we are having in London 😉

Clive Robinson August 9, 2022 4:47 PM

@ David in Toronto,

“Why anyone does this to their kids befuddles me.”

Remember in many cultures “tradition” includes mutilation of the body in various ways…

I won’t go into details because well the comment would be yanked.

But let’s say tradition ranges from tattoos, through filing of canine teeth, upwards to things you probably could not or would not wish to imagine…

That is the strength of wanting to be part of a tribe…

Oh and remember some religions have death as a penalty for leaving the faith…

All of which actually shocks me to a very fundamental level. My view being,

"Just how hard is it to actually be nice to people, and encourage them to do the best for themselves?"

Apparently it’s not a popular sentiment in the echelons of the self anointed.

lurker August 9, 2022 7:49 PM

@SpaceLifeForm

Æpic Fail. Why should I trust SGX any more than I trust Posix permissions? Again I’m not the target demographic, because I don’t mind waiting to read my key from disk each time. Yeah, sure, disk drivers, firmware, mumble, mumble …

SpaceLifeForm August 10, 2022 6:31 AM

@ ALL

Silicon Turtles

A different species named SQUIP

Note: this is on AMD and mitigated by disabling SMT which you should have done already because of Spectre and Meltdown.

See this link about the confusion regarding SMT if you are unfamiliar.

‘https://utcc.utoronto.ca/~cks/space/blog/tech/SMTSecurityUncertainty?showcomments

My Bold added:

‘https://www.theregister.com/2022/08/09/intel_sunny_cove/

Intel chips are unaffected by the SQUIP attack because they rely on a single scheduler queue, the SQUIP paper explains. However, AMD Zen 1 (not mentioned by the researchers but confirmed by AMD), Zen 2, and Zen 3 microarchitectures implement separate scheduler queues per execution unit, so contention between different units can be exploited to glean information.

The attack – which can determine an RSA-4096 key in about 38 minutes – assumes the attacker and target are co-located on different SMT threads of the same physical core, but are from different security domains. In short, it’s relevant mainly for cloud tenants relying on shared hardware.

Some squip snips from the 17 page PDF.

‘https://regmedia.co.uk/2022/08/08/squip_paper.pdf

We highlight that our covert channel is among the fastest of them, as it does not require complex eviction strategies or memory accesses (only requires low latency ALU instructions) and produces a low-noise signal. One limitation of our channel is that it only works across SMT threads and not cross-core, unlike some prior covert channels.

We were able to transmit 0.89 Mbit/s across virtual machines at an error rate below 0.8 %, and 2.70 Mbit/s across processes at an error rate below 0.8 %. In our full side-channel attack on an mbedTLS RSA signature process, we can recover the full RSA-4096 key with only 50 500 traces and less than 5 to 18 bit errors on average across processes and virtual machines. Our work highlights that pipelines with multiple scheduler queues have to be reevaluated for security and future CPUs need mitigations to prevent our attack.

anon August 10, 2022 7:03 AM

I think too much emphasis is being placed on the NSA’s influence on NIST, because I think it’s likely that ‘former’ NSA employees who now work in the industry are an equally credible threat due to actually still being on the NSA payroll.

SpaceLifeForm August 10, 2022 7:49 AM

@ anon

re: influence and insiders

I would not approach your point, which is valid, as a mutually exclusive proposition. Maybe the reason the NIST PQC process is taking so long is that NIST does not want to become a scapegoat again.

MrC August 10, 2022 8:36 AM

My takeaway from the surprise "demolition" of so many late-round candidates is that these relatively young one-way problems simply haven’t been studied enough that we can have confidence in them. Accordingly, the safest bets are probably McEliece for asymmetric encryption (which has been around since the 70s, although receiving less attention than RSA, DH, and ECC), and the hash-based signature stuff (which is secure if the underlying hash function is). Also, it’s probably wise to chain McEliece with RSA, despite the awful bandwidth and runtime costs of that pairing.

Clive Robinson August 10, 2022 8:40 AM

@ lurker, SpaceLifeForm, ALL,

“Why should I trust SGX anymore than I trust Posix permissions?”

You should not trust either, because ultimately they will both fail you as will all technology you do not directly control to the exclusion of all others.

Simple rule is if control/operation is shared then any such system is at best split trust thus by definition not trustworthy.

But as you note,

“I’m not the target demographic”

Are you sure on that?

Currently you might think you are not, but you actually are. Not because you are somehow important but because you represent a source of money / profit.

But you might be more important one day[1]… None of us know when someone might decide to target us or why. We just have to accept it’s happening and will continue to happen.

So these days it’s wise to take sensible precautions. Which raises the question of what “sensible” means[2].

The only precaution I generally advise is "segregation and isolation".

That is, be like Janus and have two faces, the private and the public. Keep two systems, one private and isolated and one public that you try to keep segregated as best you can.

[1] Classic example of this was a "student" at a party gets photographed smoking something they should not have been. Nobody cared at the time and life went on. A quarter century later the student is now a moralising drum beating politician, and suddenly that photograph is worth serious money to journalists, who "pay and display", and that student now politician gets some real nasty press to deal with.

[2] Back in the 1980’s and 90’s having seen what could be done by others, I realised I could do a lot lot worse without that much effort. So I realised there was a certain truth in both sides of,

“Do unto Others as you would have them do unto you.”

The flip side being what ever you could do to others you should expect others to try to do to you… Which quickly brought up the thought,

“How do I stop me, or worse than me?”

And without going through all the steps, it boils down to,

“If they can reach it, they can and probably will attack it.”

To which the only real response is

"Live with it, or isolate it, otherwise do not partake."

I chose not to "live with it", so my systems are not just isolated, I do not indulge in typical "social media" and the like…

Emoya August 10, 2022 10:45 AM

@Clive, All

Unless the current path of “progress” is significantly altered, I fear that non-participation will quickly become an impossibility, as has already occurred in many cases.

For example, I would prefer not to carry any cellular device, much less a smartphone. For years I resisted the societal push to carry any form of mobile device to preserve privacy and personal boundaries but was eventually required to tote a pager during the era of flip phones, then a cell phone, and now a smartphone. Unfortunately, this has become a requirement for me, and the vast majority of others, to be professionally effective.

Likewise, I must have a LinkedIn profile, and eventually caved and set up a minimal Farcebook (which has not been updated since creation), just to prevent impersonation attempts. I believe that Bruce has personally had to deal with this.

I also loathe the use of biometrics for identity/authentication for various reasons.

The evolution of technology is forcing everyone to choose between participation or exclusion, in totality. Likely within what remains of many of our lifetimes, there will no longer be any middle ground. We are all being forced to trust in technologies and people we otherwise would not.

Therefore, as you expressed, we are ALL in the “target demographic”, whether we realize/wish it or not, as it is being, and will continue to be, imposed upon us all.

Hopefully, enough qualified people are participating in security and privacy-related efforts, such as PQC, who also have the desire to preserve or even advance what little we still have, as well as the influence to see it realized.

Clive Robinson August 10, 2022 8:10 PM

@ Emoya, ALL,

Re : Have to be connected.

“Unless the current path of “progress” is significantly altered, I fear that non-participation will quickly become an impossibility, as has already occurred in many cases.”

“Progress” simply means to move in a chosen direction, so actually means little.

The two main things that actually mean something are,

1, The destination
2, The journey to it.

My personal view is I don’t want to go to the destination (total loss of privacy for the majority[1]). Or ride that journey (Politicians spouting at best far distant corner cases as "bogeymen around every corner and in every child’s bedroom"[5]).

So where I’m legally able to, I dissent, and do not play or participate in the madness and mass delusion society appears to have become[2].

I know this might look like I’m the mad one, but I’m simply old enough to have become a responsible adult before this "Emperor’s Clothes Delusion" existed (and for my sins helped develop… in the 1980’s and onwards).

Hard as it might be for some to consider, the "World Wide Web"(W3) did not start until the late 1990’s, and W3 did not really get going on mobile phones until 2002 (what went before was a real lash up). Smart Phones did not really get to be as we would recognise them until the latter half of the "noughties".

That is, there has been a major change in the affluent 1st World in only around a decade and a half. Oddly for some to consider, a number of 2nd and 3rd World nations jumped straight in to mobile phones and mobile broadband. The reason is basically that the "build out cost" of radio based infrastructure is, though costly, a small fraction of what putting wired or fiber infrastructure in would be, and it’s less prone to theft and the like (something some 1st World nations are realising as copper cable gets bolt-cutter chopped and dragged out of the ground with vehicles that are more associated with towing than cable theft, and even the cast iron manhole covers are getting stolen).

But radio based systems have other issues that cables in the ground do not. Firstly there is a spectrum usage issue. Put simply you only get about 1.4bits/second of information per Hz of bandwidth on average. The range is based on the inverse square law, where using four times the energy per bit only doubles your range. But any range you have, you deny to other users. So to meet the demands of users the size of mobile broadband cells is decreasing rapidly. Some 5G and probably most 6G full bandwidth services cell size will be little more in radius than the distance between two street light poles. Which is why there are plans to build cell Nano-Cells into the “lamp units” on top of street lighting poles and people having home based Pico-Cells going down to room based Femto-Cells, as the upwards usage of spectrum approaches a sizable fraction ~30% of the “Tremendously High Frequency”(THF) Terahertz frequencies 300GHz-3THz. THF signals have enough trouble getting through humid air and just don’t go through even window glass and curtains let alone what is used for building walls these days.

But unknown to many, the receivers that work at THF and above are very susceptible to fast transient high power –but not high energy– pulses. Even not-too-close –less than 2km– lightning strikes can take out the receiver front ends, and man made EMP and certain types of "space weather" will destroy them over very large distances (European nation to continent size).

In North America, especially Canada and parts of the US, there have been not just radio blackouts, but significant power outages due to adverse terrestrial and space weather on poorly designed and maintained infrastructure.

So we really should consider Smart Devices to be even more vulnerable… All that is built on mobile broadband could disappear in minutes and be gone for months if not years…

The recent pandemic and lockdown and interference with supply chains should have been not just a wakeup call but a massive “red flag warning” about just how fragile our 1st World economy is, much of which is now entirely dependent on Smart Tech using what is increasingly fragile radio communications…

People need to ask the question of,

“What are young city and urban dwellers going to do when power, water, sewage and mobile go out at the same time, and shops have no working tills to make sales and no way to restock anyway?”

I’m old enough to know how to live without such things and can sort of survive for a month or so because of my “hobbies”.

But by the smell of it, I suspect some of my more distant neighbors could not survive even a day or two tops without their "herbal smokes" supplier. Who will of course disappear at the first sign of trouble.

So for society to survive, we will have to learn to say “NO” to that tiny percentage that want to screw down on society to squeeze out what they can any which way they can. But we all have to be proactive in this[4].

I could quote the “tree of liberty” saying, but apparently that is now considered seditious to say[5]. So much for history teaching us…

[1] As I’ve mentioned before history shows that, what we used to know as society, can not exist without privacy.

Privacy has many important sociological factors, but from a 20,000ft view it acts as the societal "Safety Valve" that stops society blowing up, but also negates the need to have society repressively screwed down tight in a pressure container of "Guard Labour".

For society to move forward it has to be able to have room to maneuver. This means alternative views, opinions and behaviours within certain agreed limits need to be allowed even if they do turn out to be wrong.

Although not immediately obvious, society is moving all the time, and even though it might look like the sweep of a pendulum changing our morals back and forth is constant, it is not. The center "tracking line" is also moving as well, and we call this movement "the mores of society". Which in turn become the regulations and legislation that codify what is and is not acceptable in society. The original guiding principle of democracy was that all would be able to decide, which is very clearly no longer the case[2].

[2] Unfortunately society has allowed itself to be captured by a very tiny few, who amongst other things now dictate differential legislation that benefits them enormously at the expense of the rest of society. These few also understand some of the lessons of history, which means they are making changes by stealth, and slowly instilling effectively irreversible change for the worse of society. Something that has become known as,

“Boiling the frog”.

The problem is as it’s a slow process many do not see it[3].

[3] Much of younger society in general is too busy looking at smart device screens. They appear to not know, or perhaps not care, that less than a quarter lifetime ago none of it existed. Thus to other older eyes they appear not to realise they have,

"Sleepwalked into a gilded cage."

Where the gilding in reality is like,

“The Emperors new clothes.”

A self delusion that can be snatched away entirely on the word of just one individual unseen and unknown to them in a crowd they know nothing about.

[4] Lest we allow our rights and much else we think society gives us to be silently and irrevocably stripped away.

[5] The Bogeyman is something made up on remote possibilities, used primarily as a way to scare individuals into compliance in some way.

Yes there are "bad people" out there; about 1 in 1000 in WASP Nations are in jail at any one time, and they are maybe 1 in 10 of those in the entire population. But "bad" is not "dangerous", much though politicians would like you to think that way. In the UK for instance we have locked up women and taken away their children because they could not afford to pay some "corporate levied" tax that others know how to avoid… Who in that case is the "Bad Person"? Thus who really should be turned into the "Bogeyman"?

https://en.wikipedia.org/wiki/Bogeyman

JT August 11, 2022 3:04 AM

“it can take many years to update them: in the transition from DES to AES”

This almost was lost in the post – the cost of upgrading is incalculably large, and the resistance from organisations is incredible. We still have every-day life/finance critical systems based upon 3DES, because the cost to move off DES was too high, but running it through the same circuit/codeblock twice more was cheaper.

We (As a supplier to financial institutions) upgraded our systems to TLS 1.1, 1.2 and even had prototype 1.3 available at the time, it came time to turn off 1.0, and we had to because the PCI standard mandated it – customers, that is large banks, objected to this because it broke their code. They couldn’t handle 1.1 let alone 1.2 TLS, and asked for extensions (which we rather ungraciously gave them)

We had wrongly assumed they had code which used the best available, not realising they actually had to make code changes to upgrade, and the cost to them was immense!

Dave M August 30, 2022 1:55 PM

What are your thoughts on the buy and hold (long) strategy of crypto assets (bitcoin and others)? I love the long term vision of crypto assets, but I’m unsure if/how quantum computing may impact their long term potential.
