Intentional Flaw in GPRS Encryption Algorithm GEA-1

General Packet Radio Service (GPRS) is a mobile data standard that was widely used in the early 2000s. The first encryption algorithm for that standard was GEA-1, a stream cipher built on three linear-feedback shift registers and a non-linear combining function. Although the algorithm has a 64-bit key, the effective key length is only 40 bits, due to “an exceptional interaction of the deployed LFSRs and the key initialization, which is highly unlikely to occur by chance.”

GEA-1 was designed by the European Telecommunications Standards Institute in 1998. ETSI was—and maybe still is—under the auspices of SOGIS: the Senior Officials Group, Information Systems Security. That’s basically the intelligence agencies of the EU countries.

Details are in the paper: “Cryptanalysis of the GPRS Encryption Algorithms GEA-1 and GEA-2.” GEA-2 does not have the same flaw, although the researchers found a practical attack with enough keystream.

Hacker News thread.

EDITED TO ADD (6/18): News article.

Posted on June 17, 2021 at 1:51 PM • 77 Comments

Comments

Dillweed June 17, 2021 2:39 PM

“Own the Net” is a mantra for most countries that do monitoring, and what sap thinks that a VPN and encryption will really protect you…..

SpaceLifeForm June 17, 2021 3:33 PM

Today’s Special: Stingray Baked Beans cooked at 2G. You can smell them cooking a mile away! Spicy!

echo June 17, 2021 3:58 PM

An effective 40-bit key reminds me of the DES fiasco. I know nothing of the maths, but I suppose it may be an interesting exercise for some to reverse-engineer what mathematical skills and knowledge some people would need to push this through. I’m guessing that may be wildly entertaining for some, and more interesting than the standard itself. What might this suggest of people’s abilities today? I have no idea.

The office politics and politics comments in the Hacker News thread are interesting, as they note governance and human rights issues, which is itself a wider security context.

MarkH June 17, 2021 11:22 PM

@echo:

An interesting difference between DES and GEA-1 is the degree of concealment.

The downgrading of DES was done with some smokescreen about “parity bits,” which didn’t deceive academic cryptographers. They knew it had been deliberately weakened, and some yelled about it in public.

By contrast, the weakening of the stream cipher was done subtly, and I suppose required some serious expertise to carry out.

Although the technique probably hasn’t been studied as much as it deserves, there’s good reason to expect that nonlinear combinations of LFSR outputs can be used to make a stream cipher which is highly resistant to cryptanalysis. At face value, GEA-1 looked solid.
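
A quick toy of that architecture (a Geffe-style multiplexer over three LFSRs; the register sizes, taps and combiner below are made up for illustration and are emphatically not GEA-1, and the Geffe combiner itself is known to be correlation-weak):

    # Three LFSRs plus a nonlinear combining function -- the general shape only.
    def lfsr(state, taps, n):
        """Fibonacci LFSR: output the low bit, feed back the parity of the taps."""
        while True:
            yield state & 1
            fb = bin(state & taps).count("1") & 1
            state = (state >> 1) | (fb << (n - 1))

    A = lfsr(0x15, 0b00101, 5)     # x^5 + x^2 + 1
    B = lfsr(0x2A, 0b000011, 6)    # x^6 + x + 1
    C = lfsr(0x51, 0b0000011, 7)   # x^7 + x + 1

    def keystream(nbits):
        for _ in range(nbits):
            a, b, c = next(A), next(B), next(C)
            yield b if a else c    # nonlinear combiner: a multiplexer

    print("".join(str(bit) for bit in keystream(40)))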

This kind of tricky “backdooring” is what keeps some folks awake at night, wondering what and whom they can trust.

SpaceLifeForm June 17, 2021 11:40 PM

@ MarkH

Is there really any reason to trust a stream cipher to not leak?

Especially, with a sufficient amount of traffic.

I guess what I am saying, is there really any proof that stream ciphers are truly secure and can not leak?

Clive Robinson June 18, 2021 12:24 AM

@ ALL,

If you are going to read the paper, start with §5 Discussion; it will make life easier if your domain knowledge of LFSRs and Walsh Functions is limited.

In section five you will find qualification of the “Bob Morris Retirement Observation” about “known plaintext”, which, when you see how little is needed and how sparse it can be, might teach you a lesson about why Microsoft file formats were the way they were from the late 1980s onwards, and also why the EXE file format is likewise bad news.

But whilst the paper dances around the issue, like the more well-known A5/1 and A5/2 debacle, the GEA-1 and GEA-2 issues are the result of France’s Crypto Policy. Which in turn is based on France’s Security Services spying not on Criminals and Terrorists, which might get some “public support”, but on foreign Companies, which would not. France has done this as “policy” going back at least as far as the end of WWII, to supply “competitive advantage” to the “Chosen Few” French companies that various French politicians had “interests in”, if not directly then through family (look back over various corruption scandals to see some of them mapped out).

France very deliberately manipulated the communications standards to enable easy corruption at very senior levels of the French Government, which should have produced Europe-wide if not worldwide condemnation. It did not, which should speak volumes to people about what goes on in other European countries, especially the UK, as you can see GCHQ’s grubby fingerprints all over the A5 and GEA algorithms[1].

But from a more technical perspective, I point out from time to time that two of the issues of crypto standards that should really wake people up are,

1, They all have a short operational shelf life of less than a quarter of a century.

2, They get “baked in” to hardware, the operational shelf life of which is greater than a quarter of a century and could be more than half a century.

This should really cause people to stop and think about all these “Crypto Contests” run by Standards bodies that are “advised” by Security Service Agencies.

But the Security Services know, as I do and have said, that systems are upgraded by augmenting ONLY. That is, dangerous legacy algorithms remain available and can be accessed by the equivalent of “man in the middle” techniques that cause “Fall Back Attacks”, by which users are easily, and usually invisibly, forced onto the weakest security. This is supposedly for “legacy compatibility / interoperation”, but that is just a well-worn excuse as part of “finessing”.

If you look at A5/1, A5/2 and GEA-1, GEA-2 you can clearly see that the “second” supposedly more secure algorithms are intimately tied to the “first” very weak security algorithms, thus you do not get one without the other…

That is, the Security Services know full well that designing a new set of hardware / firmware to remove the weak algorithm will not happen even if required by a “Standards Authority”. At best the manufacturers will put a “high level software block” in that will get it past “Testing”, but the security services know such a block can be trivially removed by a simple Over The Air (OTA) update, or hacked via an application or OS bypass.

People should remember this when they hear about the clumsy bumblings of Law Enforcement, who attack encrypted phones and applications and even set up fake suppliers of such systems.

That is, all those “secure” Smart devices have multiple vulnerabilities already “finessed in” by the security services who infest standards bodies, before Law Enforcement slap another layer of insecurities on top of that…

It’s why I advise not just clients but everybody to “Move the Security End Point well beyond the Communications End Point”. Because if you don’t then “You have no security”. Thus to be secure you have to forgo the “convenience” of single-device operating, because all “Smart Devices” and any other “consumer electronics with communications capability” are riddled with easy “end run attacks” around any high-grade algorithms or other security you care to put on them. That is, “No Consumer Electronics Device can be secure ever”…

So nor can any OS or Apps upon Consumer Electronics, no matter what they claim.

To be secure with consumer electronic devices you have two clear, but stark choices,

1, Take the security end point off of the device.
2, Stop the communications end point on the device being available.

Preferably do both.

Look back on this blog where I’ve described how to do both at length in the past.

[1] If you are wondering why they are “stream ciphers” not “block ciphers”, you will get told something that, whilst true, is actually a lie hidden in plain sight. You will be told they have advantages in communications because of XXX, where XXX is a whole bunch of reasons. But what is being hidden is that whilst some European Security Services intimately understand “Stream Ciphers”, the various Open and academic communities almost always go with block ciphers. Which can fairly easily be turned into stream ciphers algorithmically (see the CTR-mode sketch at the end of this comment), but the result is very inefficient. So the security services have a nice big wide playing field that they effectively control, and can get away with just about anything they want to in the way of security weakening. Because outside of their closed world few have any understanding or depth in the knowledge domain of stream ciphers… If you doubt this go have a look at the NESSIE crypto competition history.

None of the six stream ciphers submitted to NESSIE were selected because every one fell to cryptanalysis. This surprising result led to the eSTREAM project.

https://en.m.wikipedia.org/wiki/NESSIE

To be honest don’t hold your breath on eSTREAM either: there were over thirty entrants, and less than one fifth were not found to have major failings. Of those that survive, I really do not expect any of them to last a couple of decades until they are broken. Especially those that would be attractive for long-life products such as Smart Meters and Implantable Medical Electronics…
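
For concreteness on footnote [1]: the standard block-to-stream conversion is counter (CTR) mode, where you encrypt an incrementing counter and XOR the result with the data. A minimal sketch, assuming the pycryptodome package is installed:

    # Block cipher -> stream cipher: AES in counter (CTR) mode.
    from Crypto.Cipher import AES

    key = bytes(16)                    # demo key only; use a properly random key
    enc = AES.new(key, AES.MODE_CTR, nonce=b"8bytes!!")  # never reuse a nonce per key
    ct = enc.encrypt(b"attack at dawn")

    dec = AES.new(key, AES.MODE_CTR, nonce=b"8bytes!!")
    assert dec.decrypt(ct) == b"attack at dawn"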

Clive Robinson June 18, 2021 1:42 AM

@ SpaceLifeForm, MarkH,

I guess what I am saying, is there really any proof that stream ciphers are truly secure and can not leak?

Empirical evidence from non-secret stream ciphers suggests very much not.

Now let’s take a 20,000ft view as a start to a theoretical view,

1) A stream cipher generator has no changing random element to it.

2) A stream cipher “key” is just an offset, that changes the starting point of the generator not how the generator produces output.

3) A stream cipher has only three parts,

3.1) A counter.
3.2) A mapping function[1].
3.3) A linear mixer function.

4) The only random element is in the plaintext and that is usually highly predictable with strong easily recognisable statistics.

So the only real security comes from that mapping function…
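
To make that three-part decomposition concrete, a stdlib-only toy with SHA-256 standing in for the mapping function (an illustration of the structure, not a real cipher):

    import hashlib

    def keystream_bytes(key: int, nbytes: int) -> bytes:
        out = b""
        counter = key                       # 3.1 the key is just the counter's start offset
        while len(out) < nbytes:
            out += hashlib.sha256(counter.to_bytes(32, "big")).digest()  # 3.2 mapping
            counter += 1
        return out[:nbytes]

    def mix(key: int, data: bytes) -> bytes:
        ks = keystream_bytes(key, len(data))
        return bytes(d ^ k for d, k in zip(data, ks))   # 3.3 linear mixer (XOR)

    ct = mix(1234567890, b"hello world")
    assert mix(1234567890, ct) == b"hello world"        # the same keystream decrypts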

The problem is that to get the level of security required, the mapping function has to be highly non-linear, and it has to do it over a range of four times or more the number of bits of an equivalent block cipher[2]. So AES-256 equivalence requires a stream cipher with a counter of at least 1024 bits.

But… You should consider the maximum run length issue as well. With a stream cipher it boils down to 3W-2: think of the all-zero state output; the output before it has to differ by at least one bit, likewise the next output. So the maximum run length of zeros over those three outputs would be,

100…000,000…000,000…001

So really you should be thinking of 12 times the block cipher key bit width. Or for AES-256 equivalence, a 3072-bit counter and map width. As the mapping function has to be complex, needing a very very high “gate equivalence” complexity, you can quickly see how the effective gate count in the mapping function would be immense.

So yes you would expect stream ciphers to not be very good security wise from the theoretical point of view as well.

Yes there are tricks to get non-linear complexity with fewer gates, but you then need to consider how you pick the points on the linear Walsh Map to give you that non-linear complexity with reduced gate count, and that can be hard, very hard, and it’s not a well-studied area in the Open community.

Also because of the “known plaintext” issue you should consider two other things,

1, A very nonlinear mixing function of large alphabet size.

2, Use the stream cipher in a nonlinear feedback or feedforward mode.

By which time any perceived advantages of a stream cipher over a block cipher have diminished astoundingly or completely vanished.

And that’s all before we have left the 20,000ft view point to start getting into actual theory.

[1] Any binary mapping, linear or not, can be replaced with a Walsh or Sequency map of XOR gates. The simple example you are most likely to come across is the “Gray Code” map, where there is one XOR gate for every bit of map width, configured in a cascade.

https://en.m.wikipedia.org/wiki/Gray_code

However the highly cyclic nature of such simple Gray codes, as can be seen on the Wikipedia page, kind of smacks you in the eye… Thus the lack of complexity is painfully obvious.
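
For anyone who wants to see it, the whole Gray-code “cascade of XORs” is the one-liner g = x ^ (x >> 1), and the fact that it is purely linear is exactly why the pattern is so visible:

    # Gray-code map and its inverse: one XOR per bit of map width, fully linear.
    def gray(x: int) -> int:
        return x ^ (x >> 1)

    def gray_inverse(g: int) -> int:
        x = 0
        while g:            # undo the cascade by repeated shift-and-XOR
            x ^= g
            g >>= 1
        return x

    for x in range(8):
        print(f"{x:03b} -> {gray(x):03b}")
    assert all(gray_inverse(gray(x)) == x for x in range(256))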

[2] You can kind of see this from the fact that a block cipher has a bit-width input of “data bit width plus key bit width”, and that the mixing effect of data and key affects the entire mapping width.

SpaceLifeForm June 18, 2021 2:04 AM

Another link

hxtps://www.vice.com/en/article/4avnan/bombshell-report-finds-phone-network-encryption-was-deliberately-weakened

MarkH June 18, 2021 3:30 AM

@SpaceLifeForm:

It seems to me that “proof” is not a very useful concept with respect to security.

Of course, systems are often proven insecure by some break … but formal proofs of security are typically so limited that they tell little or nothing about security in a practical context.

The best evidence that a cipher is strong is when intensive cryptanalysis efforts have failed to find feasible attacks.

Clive Robinson June 18, 2021 4:05 AM

@ MarkH, SpaceLifeForm,

The best evidence that a cipher is strong is when intensive cryptanalysis efforts have failed to find feasible attacks.

Well I find the best first step is,

Show me the Random

At the end of the day the main difference between secure and insecure in a crypto algorithm is the level of “predictability”.

Sufficient True Random in the algorithm says “no predictability” there. Which leaves the plaintext statistics, which you have to not just flatten but mix up, either truly randomly or in a message-dependent way, to stop them being reproduced by the same message under different keys.

So you look at the mixing function for flattening, and the mode you use the crypto algorithm in to spread them unpredictably far and wide.

If it looks like the system is not going to give you these it’s probably easiest to move onto the next algorithm…

Bob Paddock June 18, 2021 7:47 AM

@Clive Robinson

“Sufficient True Random in the algorithm says ‘no predictability’ there.”

Clive I asked NXP “how does the TRNG of the Kinetis parts work?”. They replied thus:

“TRNG is based on collecting bits from a random noise source. This random noise source is a ring oscillator that is sensitive to random noise (temperature variations, voltage variations, cross-talk and other random noise) within the device in which the TRNG is used. TRNG can be used to seed a hardware or software based implementation of a DRBG defined by SP800-90.

TRNG hasn’t been certified with FIPS and there are no plans to do it for KINETIS.

But, the TRNG is designed in accordance with SP800-90B guidelines, if you combine it with a SW DRBG (per SP800-90B guidelines), it should meet the FIPS requirements.”

What are your thoughts?

Won’t any true random source make a really good thermometer?
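
For reference, the “seed a SW DRBG” step NXP describes has roughly this shape. A minimal HMAC-DRBG sketch in the style of SP 800-90A (reseed counters, personalization strings and health tests are all omitted, so nothing here is FIPS-certifiable; os.urandom merely stands in for the ring-oscillator entropy source):

    import hmac, hashlib, os

    class HmacDrbg:
        """Minimal HMAC-DRBG (SHA-256), reduced to instantiate and generate."""
        def __init__(self, seed: bytes):
            self.K = b"\x00" * 32
            self.V = b"\x01" * 32
            self._update(seed)

        def _hmac(self, key, data):
            return hmac.new(key, data, hashlib.sha256).digest()

        def _update(self, data=b""):
            self.K = self._hmac(self.K, self.V + b"\x00" + data)
            self.V = self._hmac(self.K, self.V)
            if data:
                self.K = self._hmac(self.K, self.V + b"\x01" + data)
                self.V = self._hmac(self.K, self.V)

        def generate(self, n: int) -> bytes:
            out = b""
            while len(out) < n:
                self.V = self._hmac(self.K, self.V)
                out += self.V
            self._update()
            return out[:n]

    drbg = HmacDrbg(os.urandom(48))    # 48 bytes of (stand-in) TRNG output as the seed
    print(drbg.generate(32).hex())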

metaschima June 18, 2021 7:48 AM

Can we just agree that any cryptographic primitive created in cooperation with a secret service agency cannot be trusted, because it is likely to have been intentionally weakened or backdoored? Of course, this will not be discovered immediately, as this and previous cases have shown, but it will be discovered eventually. The main goal of any secret service agency, for as long as they have existed, is espionage. How can you spy on people if you make strong crypto with no backdoor? This cannot be. Crypto produced with their “cooperation” must be easy enough for the agency to break, or it must be backdoored. Anyway, this is my opinion, but it is backed up by numerous past examples of this happening: the NSA with Dual EC DRBG, NSA with DES/LUCIFER, the British secret service selling Lorenz machines after WWII that they could break using Colossus, NSA weakening the Hagelin CX-52, NSA and Crypto AG, and others like the one in this article.

Clive Robinson June 18, 2021 9:06 AM

@ Bob Paddock,

“This random noise source is a ring oscillator that is sensitive to random noise (temperature variations, voltage variations, cross-talk and other random noise) within the device in which the TRNG is used. “

That is mainly “chaos” at best, and very frequently very deterministic.

As I mentioned yesterday on another thread, there is the principle of the “Loose Locked Oscillator”, and it’s something that is a right royal pain when you are developing TRNGs.

Ring oscillators are very very very sensitive to the effects that make “Loose Locked Oscillators”.

Briefly, Ring Oscillators are “phase shift” oscillators where the phase shift is caused by either direct inversion or the transmission characteristics of the gate.

In microcontrollers the ring oscillator gets its phase delay from either a very high Q tuned circuit (XTAL) or charging and discharging one or more RC lowpass circuits.

The problem is that the gates, be they inverters or buffers, are very very high gain analog amplifiers designed for “smashing into the rails”; as such their behaviour around the switching or mid point is quite unstable, and they are very sensitive to “rail noise”, which gets shifted by the bucket load around logic IC internal power traces. Worse, in a ring oscillator you get “metastability” issues that lead to temporary and soft lock-ups, which means they stop oscillating so stop producing output. So when combined with a deterministic algorithm it ceases to be a TRNG but a PRNG that might or might not –probably not– be CS compliant.

In order to stop you observing just how crappy / useless ring oscillators are as TRNGs, usually they hide them behind a Hash or other CS algorithm.

But you don’t have to stop a ring oscillator to make it useless, just squirt an EM signal at it that either is close to the oscillator’s frequency, or is a much higher frequency AM modulated close to the oscillator’s frequency. The result is the ring oscillator synchronizes to the signal. Just like the stereo pilot signal in an Analog FM broadcast, or the Colour Chroma signal burst in an old analog Colour TV signal like NTSC or PAL.

But it gets more fun…

Usually, because the individual gates in a ring oscillator provide so little time delay thus phase shift, they will use more than 20 gates in each ring oscillator, making the susceptibility to external signals very high; they also use multiple very high frequency ring oscillators and combine them in a D-Type latch or similar.

When you look “close in” at the latch output on an oscilloscope the trace does look very random. But it is nothing like random whatsoever. If you open out the timebase you will see that the transitions change frequency and bunch up and spread apart very very reliably. If you “low pass” the output you get a very pure looking sinewave. Because rather than being random it’s an “over sampled difference frequency” between the two ring oscillator frequencies…

As you probably know, sinewaves are very very easy to find in noise, you just use a very narrow bandwidth filter to remove the bulk of the noise. There are DSP algorithms that will not just find a sinewave artifact but lock onto it and follow it as a Digital Phase Locked Loop (D-PLL). Without going into details, the sinewave will appear through the use of an encryption algorithm just like that frequently seen image of Linux’s Tux penguin mascot, only a lot lot more easily synchronised. Worse, unlike the Tux image, it does not matter how wide the encryption algorithm is in bits… Because it “samples the signal”, that means you will see either the actual sinewave at its natural frequency, or a harmonic or sub-harmonic of it. Whichever it is does not matter, because a D-PLL will lock onto it, and that will discipline an “on frequency” oscillator even if it’s “software only”, like a Digital Numeric Oscillator that can be stepped up or down in the tiniest of frequency increments you want. You can also make it very frequency agile by using it in the equivalent of a “fractional N” or “Sigma Delta” D-PLL.

The result: an attacker gets to know exactly what your supposed TRNG is doing… Thus you are lulled into a very very false sense of security by, say, Intel’s TRNG, which is again ring oscillators but hiding behind a hash algorithm…

So if I see “ring oscillator” in a design, depending on why I’m looking at it, I either say in a polite way “The door’s over there”, “You should think of a better way to do this”, or “At the very least run your crypto not in ECB mode but a nonlinear chaining mode with frequent IV change and restart”.

But if you look at our host’s TRNG, the “entropy pools” in effect do the chaining with frequent IV change and restart, but with effectively a very very very long chain (it kind of makes it a lagged Fibonacci generator with multiple taps, which is still linear though).

echo June 18, 2021 9:18 AM

@MarkH

An interesting difference between DES and GEA-1 is the degree of concealment.

The downgrading of DES was done with some smokescreen about “parity bits,” which didn’t deceive academic cryptographers. They knew it had been deliberately weakened, and some yelled about it in public.

By contrast, the weakening of the stream cipher was done subtly, and I suppose required some serious expertise to carry out.

Although the technique probably hasn’t been studied as much as it deserves, there’s good reason to expect that nonlinear combinations of LFSR outputs can be used to make a stream cipher which is highly resistant to cryptanalysis. At face value, GEA-1 looked solid.

This kind of tricky “backdooring” is what keeps some folks awake at night, wondering what and whom they can trust.

Well yes that is quite obvious from a straight reading of the material. I’m more interested not in what is said but what is not said.

@metaschema

Can we just agree that any cryptographic primitive created in cooperation with a secret service agency cannot be trusted, because it is likely to have been intentionally weakened or backdoored? Of course, this will not be discovered immediately, as this and previous cases have shown, but it will be discovered eventually. The main goal of any secret service agency, for as long as they have existed, is espionage. How can you spy on people if you make strong crypto with no backdoor? This cannot be. Crypto produced with their “cooperation” must be easy enough for the agency to break, or it must be backdoored. Anyway, this is my opinion, but it is backed up by numerous past examples of this happening: the NSA with Dual EC DRBG, NSA with DES/LUCIFER, the British secret service selling Lorenz machines after WWII that they could break using Colossus, NSA weakening the Hagelin CX-52, NSA and Crypto AG, and others like the one in this article.

This requires a range of non-technology skills to deploy effectively: exploiting political relationships, psychology and sociology, organisations, policy and legal structures, focus of interest, bikeshedding and fashion, and so on.

Clive Robinson June 18, 2021 9:45 AM

@ metaschima,

Can we just agree that any cryptographic primitive created in cooperation with a secret service agency cannot be trusted

Just one problem: if you did not fully create the crypto system, how do you actually know whether a Secret Service agency was involved or not?

In fact how can any company with employees designing crypto systems or parts there of, know that their employees do not have some “relationship” be it voluntary, coerced, or just hood winked by a security service agency?

Remember RSA and the Dual Elliptic Curve random generator? Or Juniper Networks, with similar odd code turning up in their shipped products.

Heck, I’ve sat in on International Standards meetings and watched “technical representatives” coming up with “Health and Safety” excuses and similar to backdoor digital communications… You know exactly what they are up to, but if you challenge them you get the equivalent of “Think of the Children” FUD coming at you from the Extended Five Eyes technical representatives, and they just keep at it until you are too exhausted or ostracized… The idiot from the NSA that so badly tried to “finesse” a standard caused such hate that they got turned on and eventually “outed”, and that caused all sorts of embarrassment for the standards body…

So if it’s not entirely your own work, from the roll of a dice (entropy source) upwards, it’s best to assume it’s not secure.

It’s basically what they call a “root of trust issue”: if you can not establish a real root of trust then it’s probably “game over” for your security. But with law enforcement getting in the game it’s going to also be “Go straight to jail, do not pass go, do not collect $200” regardless of what you might or might not have done. They can judicially squeeze you via the politically controlled “Special Administrative Measures” or equivalent till you plead guilty to something. They get promotion brownie points and you are left bankrupt and unemployable; such is the modern day version of “Rights Stripping”.

It’s kind of the new “Power Politics”, and unless you have billions offshore, you don’t have the power if you behave legally…

MarkH June 18, 2021 11:13 AM

@Clive:

By the usual definition, each cipher defines a fully deterministic function (or often, a pair of such functions).

To describe a deterministic process as “random” or “true random” must surely lead to misunderstanding by some readers.

Randomness is such a crucial topic, and at the same time so baffling and mystifying.

Consider the legion of crypto noobs who read about “perfect secrecy” and then invent easily broken stream ciphers …

On topics where misunderstanding is so prevalent, it’s best to use terminology (and language more generally) with great care.

Clive Robinson June 18, 2021 12:37 PM

@ MarkH,

To describe a deterministic process as “random” or “true random” must surely lead to misunderstanding by some readers.

Unfortunately they both exist in a cipher system.

If you have say AES in ECB, many might consider it “deterministic”. But the key should be “truly randomly selected”, and even though it does not look random to most people, even text “plaintext” contains truly random elements, otherwise no new/useful information could be sent.

The same is true for an OTP, where the KeyMat should be truly random in every bit, but it still uses a deterministic process for its mixer function, and has a fair degree of localised determinacy in its text-based plaintext.

So the confusion does come “built in” with all crypto systems, as at the end of the day a system has to be both truly random and deterministic.

Then of course there are “the shades inbetween”, where deterministic becomes random to observation, or pseudorandom. Also there is the notion of “chaos” to consider as well.

It’s all kind of messy, and it’s not unusual to expect people to know just what sort of random you are talking about: from the truly deterministic but too complex for an observer to tell, through to the truly random, where from time to time an observer might think they see determinism, simply because things are unbounded.

I hope that’s clear 😉

Tatütata June 18, 2021 1:06 PM

Do you even need to go through the trouble of inserting some known plaintext?

Checking my references (I worked with GSM standards a while back), page flip, flip, flippitty flip…

For GPRS, the SGSN sends a 128 bit random number RAND, which is received at the MS and encrypted on the SIM card using the A8 algorithm with the 128 bit subscriber identification key Ki. The session key Kc is derived from a (smallish) subset of the output block bits. The exact A8 algorithm used is left at the discretion of the operator (for those that actually applied encryption), but the various COMP128 options commonly offered were known to have weaknesses.

https://arxiv.org/pdf/1002.3175.pdf

2) Flaws in implementation of A3/A8 algorithms: Although the GSM architecture allows operator to choose any algorithm for A3 and A8, many operators used COMP128 (or COMP128-1) that was secretly developed by the GSM association. The structure of COMP128 was finally discovered by reverse engineering and some revealed documentations, and many security flaws were subsequently discovered. In addition to the fact that COMP128 makes revealing Ki possible especially when specific challenges are introduced, it deliberately sets ten rightmost bits of Kc equal to zero that makes the deployed cryptographic algorithms 1024 times weaker and more vulnerable, due to the decreased keyspace. Some GSM network operators tried another new algorithm for the A3/A8, called COMP128-2. COMP128-2 was also secretly designed and inherited the problem of decreased keyspace. Despite of such important problem, no other problems are reported so far. However, we can prospect for new discovered vulnerabilities in the future as it is secretly designed. An improved version of COMP128-2, called COM128-3, is also proposed that generates 64 bits of session key and resolves the problem of decreased keyspace.

So this new discovery would seem almost redundant. Perhaps there is a way to combine both vulnerabilities into one even more effective attack?

And while we’re at it, what prevents SIM manufacturers from introducing properties reducing the effective entropy of the delivered SIMs, ranging from the subtle mathematical relationship in the RNG, to the outright brazen setting of Ki to cryptofunction(TLA_provided_key,IMSI)?
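
The keyspace arithmetic in the quoted passage takes two lines (the masking below is the effect described for COMP128-1/-2, not actual vendor code):

    # COMP128's weakening: the ten rightmost bits of the 64-bit session key Kc
    # are forced to zero, shrinking the keyspace by a factor of 2**10 = 1024.
    import os

    kc_full = int.from_bytes(os.urandom(8), "big")   # stand-in for the A8 output bits
    kc = kc_full & ~0x3FF                            # ten low bits zeroed
    print(f"effective key bits: {64 - 10}, keyspace {2 ** 10}x smaller")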

MarkH June 18, 2021 1:07 PM

@Clive:

The key is supposed to be random.

The plaintext has some entropy … otherwise there’s no point in sending it!

But the cipher — while necessarily having certain pseudo-random properties — is not random at all.

I believe I understood what you meant by those words; I’m fairly certain others will not.

SpaceLifeForm June 18, 2021 2:30 PM

@ MarkH, Clive

Good points about the ‘language’.

My questions were rhetorical. I should have used ‘trust’ instead of ‘proof’.

As they say, you can’t prove a negative.

The plaintext has some entropy … otherwise there’s no point in sending it!

See zoom call. Non-changing plaintext (the wall behind the person).

See silent time during a voice call.

A smart app would not transmit.

A dumb app can be exploited to leak the key over time, and the attacker can then go back and decrypt the already recorded data.

I do not trust stream ciphers to not leak if there is sufficient repetitive plaintext data.

Keeping the duration of the stream short may be the best defense.

Clive Robinson June 18, 2021 5:42 PM

@ Tatütata,

worked with GSM standards a while back), page flip, flip, flippitty flip…

The last time I was involved with the design of a GSM phone in earnest, the stack of A4 sheets was about four foot high, and even in very thin plastic binders there were so many we had to make a new one just to hold an index of indexes…

And there were bugs in the documentation and we found one by bringing a UK network down…

As you are no doubt aware, the phone is supposed to re-register with the network every so often. The specification got changed because of certain issues. Put simply, the phone was supposed to re-register in half the previous time period if the last re-register failed… But the spec did not give a minimum time… The UK network used a certain well known European Switch provider, and the way it re-registered phones was not the new improved way…

So even though the phone was registered with the network, it ended up halving the time period until it effectively monopolised the control channel… Fun days, not…

Back then the backend systems of network providers were “Unix”, and that was in effect “big iron” compared to the high-end PC systems we did CAD/CAM etc on…

Then somebody “fingered me to QC” as being “The man who spoke guru”, as the machine I was using –that was actually my own– ran AT&T Sys Vr4 with “DOS-Merge”, of which you could run around 6-8 copies simultaneously via a multi-port Serial Card that the individual In-Circuit Emulator (ICE) debug/development hardware for the CPUs ran with. So I suddenly “owned the problem” of dealing with the UK network people.

If that were not bad enough, “The big-big boss” got told, and it turned out his daughter was in her first year at Uni and “Could I just help a little with her Unix and maths stuff”…

And so on giving me less and less time to do my “real work” that my boss kept piling up on me because I got things done…

I thus learned the true meaning of,

“If you want something done quickly, but well, give it to a busy person”.

Clive Robinson June 18, 2021 6:39 PM

@ SpaceLifeForm, MarkH,

I do not trust stream ciphers to not leak if there is sufficient repetitive plaintext data.

Yup… Fixed screen startup logos are almost perfect “known plaintext”. As you might know, the more linear that “mapping function” is, the fewer bits are needed. In theory twice the effective key “counter” width is all that’s needed to “knock it dead”.

But many who design Lagged Fibonacci type systems like Linear Feedback Shift Registers appear to forget something… That “clock arithmetic” or “Chinese Remainder” mod rules apply.

If you look at the GEA-1 algorithm then the GEA-2 algorithm, GEA-2 just tacks another LFSR on in parallel…

Now LFSRs have an interesting property: if you take the output of an LFSR, delay it through another Shift Register, and add the two together via an XOR gate or equivalent, you end up with the same output sequence as the LFSR, just shifted further along.
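
That shift-and-add property is easy to verify on a toy register (the taps here give the primitive polynomial x^4 + x + 1, period 15):

    # XORing an m-sequence with a delayed copy of itself yields the SAME
    # sequence at another phase -- the shift-and-add property.
    def lfsr_stream(state=0b1000, taps=0b0011, n=4, length=30):
        out = []
        for _ in range(length):
            out.append(state & 1)
            fb = bin(state & taps).count("1") & 1
            state = (state >> 1) | (fb << (n - 1))
        return out

    period = 15                                        # 2**4 - 1
    s = lfsr_stream(length=2 * period)
    summed = [s[i] ^ s[i + 3] for i in range(period)]  # stream XOR 3-step delayed copy
    base = s[:period]
    assert summed in [base[k:] + base[:k] for k in range(period)]  # just a phase shift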

So adding an extra LFSR to an existing system with linear or near linear combining does not get you any extra counter bit width equivalent it is just an expensive way of adding a counter offset.

Now ask yourself a question: what if you use two LFSRs that differ only marginally, in say one “tap” position?

It turns out you in effect delay a sub-sequence of the common LFSR circuits.

It’s what you would expect when you get sufficiently “down and dirty” with Walsh / Sequency circuits, where all you are really doing is adding individual bit sequences together via XOR gates…

Thus unless you take care to ensure your LFSRs do not have any commonality in length and taps, you effectively “lose counter bit width” or “key bit length”.

Now most hardware engineers like “commonality” in design, they love repeating the same circuit block over and over. Especially if it has lots of tap points…

Take another look at those LFSRs in GEA-1/2, they are not exactly sparse in taps…

So when you design a “common hardware block”, what a lot of hardware engineers would do is use the fact that an XOR gate is a “Controlled NOT Gate”. That is, if you fix one input “high” the result is that the XOR is a NOT gate or inverter; if you fix it low then it is a non-inverting buffer… So by adding an AND gate in series with one of the XOR inputs, you have a control signal that makes the resulting combined gate either a non-inverting buffer or an XOR gate. Drive those control signals from a software-driven register and you can make just one circuit block do any of the LFSR functions in the Key Stream generator… So you could make the actual LFSR length small and have the same taps, with the result that you could end up only generating a slow squarewave as the key stream if you so wished…

If you actually made those registers that hold the control settings a shift register, to an outside observer it would look like you were shifting in a very long key… If you just clocked in “random data” you would get what looks like a valid “key-stream” at the output. It’s only if you get the right combination that the key stream gets very very weak, so your chance of finding one “randomly” is likewise very very small… But if you know the circuit hidden away inside that “secure chip”, then you can arbitrarily set the strength of the key-stream as you see fit…
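
A software rendering of that gated-tap trick (hypothetical, of course; the masks below are illustrative, not from any real product):

    # One common LFSR block whose feedback taps are AND-gated by a hidden
    # control register: change the mask and the "same" hardware drops from a
    # maximal-length-looking stream to a trivially periodic one.
    def run(state, tap_mask, n=8, steps=32):
        out = []
        for _ in range(steps):
            out.append(state & 1)
            fb = bin(state & tap_mask).count("1") & 1   # parity of the gated taps
            state = (state >> 1) | (fb << (n - 1))
        return "".join(map(str, out))

    print(run(0x5A, 0b00011101))   # x^8 + x^4 + x^3 + x^2 + 1: long period
    print(run(0x5A, 0b00000001))   # degenerate mask: the state just rotates, period 8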

So yes, there are all sorts of hidden gotchas for the unwary. But let’s just say you were an independent crypto system manufacturer supplying all sorts of countries. You could use identical hardware for all customers: those on the A list get very strong key-stream generation, say 80-bit equivalent for B-list customers, and 40-bit or less for C-list customers.

Sound familiar? Well several European-based crypto hardware suppliers effectively did just that… And one of them got mentioned on this blog today, by someone asking about alleged TRNGs embedded in chips…

MarkH June 19, 2021 12:41 AM

@SpaceLifeForm, Clive:

The only thing that repetitive, patterned or otherwise predictable plaintext can do to a stream cipher, is make it easier to infer the keystream.

That doesn’t matter at all for a strong stream cipher; in fact, designers of such ciphers must assume that an attacker has access to the raw keystream. The design goal is to make it impossible (or at least, more costly than anyone can afford) to deduce key or initialization vector bits from the keystream.

If the cipher is strong, repetitive plaintext won’t leak any secret.

If the cipher is weak, plaintext entropy may delay cryptanalysis, but cannot be trusted to prevent it.

========================

As Clive observed, it’s best practice to use strong block ciphers if you can.

Stream ciphers may be useful where computing resources are very limited. I’ve looked into coding crypto on microcontrollers with very tiny RAM capacity which would prohibit strong block ciphers.

========================

Readers not already familiar with it, may enjoy learning about the “shrinking generator” of Coppersmith, Krawczyk and Mansour (1994).

It’s wonderfully small and simple, made of two LFSRs. One is the source for keystream bits; the other functions as a “gate”, with raw outputs of the first either being used in the keystream, or discarded.

The 1994 paper derives or proves many interesting properties of that construction.

When the shrinking (or more formally, decimating) generator is used as the paper recommends, the LFSR polynomials (tap sequences) are effectively the key, so a keystream is fully determined by the pair of polynomials and a pair of initialization vectors.

As far as I have been able to discover, no cryptanalytic attack has been found against the shrinking generator with secret polynomials: the best known attack remains exhaustive search, which with a total of 128 LFSR bits will remain beyond feasibility for a long time to come.

The shrinking generator also has a long period, allowing a large volume of data to be encrypted using a single key.
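
For readers who want to experiment, a minimal sketch of the construction (toy register sizes with fixed polynomials; per the paper, the polynomials themselves would be secret and much longer):

    # Shrinking generator: LFSR A supplies candidate bits, LFSR S gates them.
    # A's bit is kept when S outputs 1 and discarded when S outputs 0.
    def lfsr(state, taps, n):
        while True:
            yield state & 1
            fb = bin(state & taps).count("1") & 1
            state = (state >> 1) | (fb << (n - 1))

    def shrinking_keystream(a_init, s_init, nbits):
        A = lfsr(a_init, 0b0000011, 7)    # x^7 + x + 1 (toy source LFSR)
        S = lfsr(s_init, 0b00101, 5)      # x^5 + x^2 + 1 (toy selector LFSR)
        out = []
        while len(out) < nbits:
            a, s = next(A), next(S)
            if s:                          # keep A's bit only on a "yes" from S
                out.append(a)
        return out

    print("".join(map(str, shrinking_keystream(0x41, 0x11, 40))))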

SpaceLifeForm June 19, 2021 1:31 AM

@ MarkH, Clive

If the cipher is weak, plaintext entropy may delay cryptanalysis, but cannot be trusted to prevent it.

Exactly. This is what Downgrade Attacks are all about.

Clive Robinson June 19, 2021 8:14 AM

@ MarkH, SpaceLifeForm,

As far as I have been able to discover, no cryptanalytic attack has been found against the shrinking generator with secret polynomials

It is easily proved that there is a worst-case condition where it could be broken with as little as 2W bits of the “generator” LFSR, where W is the size of the “effective counter” in bits.

Basically there is a known attack on an LFSR that says all you need is 2W bits out of the LFSR.

To get that out of the shrinking generator, all that needs to happen is for the run length on the gating/decimating generator to be 2W or longer. As I demonstrated in a comment further up, you can get a maximum run length of 3W-2 bits out of an LFSR…

So it’s more than possible for someone who controls the two LFSRs to completely bork the shrinking generator without the user being aware of it…
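
The 2W-bit attack being invoked is Berlekamp-Massey; a compact GF(2) version, with a check that 2W consecutive output bits pin down a width-W register (toy polynomial assumed):

    def berlekamp_massey(bits):
        """Length of the shortest LFSR generating `bits`, over GF(2)."""
        n = len(bits)
        c, b = [0] * n, [0] * n
        c[0] = b[0] = 1
        L, m = 0, -1
        for i in range(n):
            d = bits[i]                        # discrepancy with the current LFSR
            for j in range(1, L + 1):
                d ^= c[j] & bits[i - j]
            if d:
                t, shift = c[:], i - m
                for j in range(n - shift):
                    c[j + shift] ^= b[j]
                if 2 * L <= i:
                    L, m, b = i + 1 - L, i, t
        return L

    # 2W bits of a width-W LFSR suffice: here W = 7, polynomial x^7 + x + 1.
    state, taps, W = 0x41, 0b0000011, 7
    bits = []
    for _ in range(2 * W):
        bits.append(state & 1)
        fb = bin(state & taps).count("1") & 1
        state = (state >> 1) | (fb << (W - 1))
    assert berlekamp_massey(bits) == W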

Dillweed June 19, 2021 8:51 AM

@ALL @Everybody

On this physical plane we live on, called Earth, and in the theoretical world: is there truly anything one entity (by “entity”, for discussion purposes, I mean any one government) could do to observe an event and measure its randomness, and thus generate a key that no one else could observe and measure, thus creating an unbreakable stream?

For instance, if one government had the only Hadron Collider anywhere, and they were able to measure the nuances from generating sub-atomic particles, which may create a unique random event and observable sounds, pressure etc., would that be like a one-time key? Or if a certain country had the best and only technology to go into space and observe all the random events, recording such to generate unique data, again could that be the impetus to make a one-of-a-kind, random, secure crypto stream?

OR

Is our existence in this world and universe non-unique, and everything observable and measurable by most?

manhack June 19, 2021 9:06 AM

The only official (and updated) website for the SOG-IS is sogis.eu; sogis[.]org, which seems to have been hijacked in 2019 (see archive.org), contains links to gagdgets-reviews[.]com in place of the previous “Agreement”, FYI.

MarkH June 19, 2021 9:25 AM

@SpaceLifeForm, Clive:

“Downgrade Attacks”

“some one who controls the two LSFR’s”

Woe to them, who have adversaries in control of their crypto

MarkH June 19, 2021 11:05 AM

@Clive:

I’m trying to work out the attack concept on the Shrinking Generator, which you proposed above.

If the length of the source LFSR (called A in the original paper) is L bits, and an attacker knows that the selection LFSR (S in the paper) will output at least 2L consecutive “yes” bits — at a known point in the keystream — then Berlekamp-Massey can be used to fully determine the A sequence.

Is that what you have in mind?

If the attacker expects S to output at least 2L consecutive “yes” bits, but doesn’t know where they will occur, do you think A could be reconstructed on that basis? If so, how?

If an attacker can predict the S sequence with a significant degree of confidence, isn’t the generator already broken by the authors’ definition?

SpaceLifeForm June 19, 2021 5:20 PM

@ MarkH, Clive

Woe to them, who have adversaries in control of their crypto

So, basically everyone.

Just because you are not a high value target does not mean that an adversary is not watching.

Who is watching the watchers?

MarkH June 19, 2021 6:36 PM

@SpaceLifeForm:

For generations, that adversaries watch your crypto has been a fundamental premise of cryptographic science. Even if the enemy doesn’t (yet) know the system, such knowledge must be assumed.

Adversaries watching your crypto is very different from adversaries controlling your crypto. If things have deteriorated that far, you might just as well “send in the clear.”

When I design embedded systems or PC applications, I’m the one who decides the protocols and techniques, not potential attackers.

I considered the Shrinking Generator at one time for a small microcontroller application, for which most other ciphers were not feasible. In the event, the project didn’t get the green light.

At the time, I was a little worried that the Shrinking Generator hadn’t received enough attention from academic researchers to engender high confidence in its soundness.

By now, the Shrinking Generator has gone 27 years without being broken.

Clive Robinson June 19, 2021 8:24 PM

@ MarkH,

Woe to them, who have adversaries in control of their crypto

That is in effect what happens with all crypto on mobile phones currently… The crypto is, deliberately by design, not controlled by the user; just the illusion that they have control is given, if even that.

So the usage model is,

1, The first party calls the second party.
2, Their call security key is determined by a third party.
3, With a fourth party covertly trying to eavesdrop the call, but without needing contact with the third party to get the KeyMat etc.
4, However, the fourth party designed the hardware used in the key stream generator.

You can fill in GCHQ/NSA or equivalent for the fourth party, hiding behind ETSI/GSM.

The third party is the mobile phone “service provider”.

And the first and second parties are hubby and wife Joe and Jo Average.

That is the reality of mobile phone usage…

Which is the model for my discussion of the shrinking generator being potentially borked by the fourth party with a “Golden back door”.

Which brings us onto your next post,

I’m trying to work out the attack concept on the Shrinking Generator, which you proposed above.

Which brings us onto,

If the length of the source LFSR (called A in the original paper) is L bits, and an attacker knows that the selection LFSR (S in the paper) will output at least 2L consecutive “yes” bits — at a known point in the keystream — then Berlekamp-Massey can be used to fully determine the A sequence.

Is that what you have in mind?

It was not my intention to think of it as a concrete exploit; it was to show that it is possible to bork the system if you designed it and are thus able to “backdoor” it. Which the numbers indicate is possible to do.

So the obvious first idea for a solution to stop this is to ensure that the design can not give 2L consecutive bits out from the “source LFSR”. Thus you would need to ensure you can not have a sufficient run length of ‘yes’ bits from the ‘selection LFSR’. The easy way to do it would be to have a maximum run length of 3S-2 bits being less than 2L bits. With the LFSRs being a power of two, you end up with the ‘selection LFSR S’ being half the size of the ‘source LFSR L’.

But is that a sufficient condition to stop the potential for a back door? Unfortunately not.

Whilst it’s said that 2L consecutive digits are needed for the Berlekamp-Massey attack, that is not quite true. Due to the fact that you can add two streams from one LFSR configuration and get a shifted version of the stream, in theory it can be 2L “effectively consecutive” bits… How you might achieve this as the designer does not really matter.

That is, the reasoning shows that the shrinking generator can be backdoored with a series of shorter run lengths.

So much for showing backdooring is possible; you raise the question of how a designer of the generator might go about this.

Well the first thing to consider are some of the implications of the usage model.

That is,

1, The first three parties can not look in the “black box” and see the design the fourth party made. Which gives the fourth party a lot of flexibility.

2, In normal operation it’s assumed only the third party can see the key.

Which brings us to your two questions of,

If the attacker expects S to output at least 2L consecutive “yes” bits, but doesn’t know where they will occur, do you think A could be reconstructed on that basis? If so, how?

Let’s just assume for the moment the fourth party designed the system to put out 2L “effectively consecutive” “yes bits” somewhere near the start, or at some other offset, but not always in precisely the same place.

The attacker knows the start point can only be one of a very few positions, thus can find it in some manner (assume just a small range over say the first thousand bits).

In a practical implementation does this have to be just a brute force search?

No, of course not. The very nature of the way the shrinking generator works says there is an explicit time-based side channel inherent in the design.

That is, to ensure that there are always “key stream bits” at the output bit rate, the generator needs to be run into a buffer system which can hold as large a number of bits as required for the longest run of “no bits” from the “selection LFSR S”.

We already know that the maximum run length is 3S-2, and thus the buffer most likely would be increased to some power of 2 larger than this, say 4S. Further, the generator would have to run faster to “stay ahead” and prevent buffer starvation. Thus the generator is probably going to be clocked at least twice as fast as the required output bit rate. Which means the generator will stop when the buffer is full and start again when it reaches a point that relates to always holding sufficient bits to cover a 3S-2 run length of “no bits”.

The electronics for the buffer would supply a complex power and time signature. It would not be too difficult to arrange for a time-based covert side channel to modulate the edges of the pulse train in some way which would also be meaningful as to the start position of the 2L effectively consecutive bits. The specifics are not of importance, only the knowledge that such modulation of the timing of the pulses would be transparently sent through the rest of the system, thus be visible to the attacking fourth party on the “Air Interface”.

I hope that gives you ideas as to how you might go about “backdooring” the design.

SpaceLifeForm June 19, 2021 11:33 PM

@ MarkH, Clive

Language Busted again. s/watch/control/

Just because you are not a high value target does not mean that an adversary is not controlling.

Via Downgrade Attacks. Who is controlling the controllers?

Well, unless you roll your own crypto (be very careful), then the controllers are not you. As Clive clearly noted.

The problem as I see it, is that there are adversaries that can get the MITM, and spy upon anyone they want. Obviously, they are not going to be looking at everyone because of the data volume. Yet, they can do it without a warrant. They can go fishing instead of having any probable cause. Which then can allow them to collect dirt on their political enemies. They can collect business intel, and manipulate the markets. They can collect street buzz, and then feed the media misinformation to get ahead of the story, and confuse the public at large.

MarkH June 20, 2021 3:36 AM

@Clive:

I at last figured out that we are somewhat at cross purposes.

My intention was to explain that for a strong stream cipher, security is completely independent of plaintext properties; and that stream ciphers can be made strong, apparently even quite simple ones.

I offered the Shrinking Generator as an example of highly effective non-linear combination of two linear elements; I have not endorsed it for any particular application, telephony or otherwise.

And I most certainly did not have in mind that any particular simple + efficient stream cipher will be resistant to intentional weakening by the designer.

But now that you’ve got me thinking on it, I assert that the Shrinking Generator is hard to weaken (in the sense of conventional cryptanalysis) without that being easily detectable.

Supposing a situation like that of GEA-1 with the public promulgation of a standard, you can’t really weaken the cipher by covert hardware tricks and the like, because numerous independent firms will be implementing the standard.

To use the “register A exposure” attack you outlined above, it’s necessary to control the polynomial and IV of register S, which both should be part of the key. Unless the attacker controls the keys (hardly a reasonable threat model!), or includes some needless and suspicious manipulation of key bits in the promulgated standard, how could such an attack be possible?

Since the Dual EC scandal, the crypto community is alert to subversion.

The genius of the GEA-1 subversion is that the backdoor was subtle enough that people reviewing the standard could miss it.

I just don’t see how a subverted standard could be made for the Shrinking Generator, so that the defect would not be obvious to anybody who read the original paper.

========================

I was not addressing side-channel attacks, which are another kettle of fish. Obviously, the Shrinking Generator is especially prone to leakage, and would require minute and laborious attention to signature masking. That’s a good reason NOT to use it for a variety of applications.

My intuition is that any stream cipher with low computational cost will be highly vulnerable to side-channel attacks, though not perhaps as badly as the Shrinking Generator.

Anyway, side-channel is tangential to the GEA-1 saga, which is about a data-at-rest attack requiring no access to gadgets performing the crypto.

Clive Robinson June 20, 2021 5:25 AM

@ SpaceLifeForm,

Who is watching the watchers?

Best answered with another question,

“Have you ever seen a circular firing squad, taking a shot in the dark?”…

Who? June 20, 2021 6:34 AM

From VICE’s article:

Thus, they speculate that a weakness was intentionally put into the algorithm. After the paper was published, the group that designed the algorithm confirmed this was the case.

I am not worried at all about this discovery. I do not care about Wi-Fi encryption schemes either, as they are supposedly protecting traffic on only the first ten to twenty meters of its journey.

Everyone should consider WWAN and Wi-Fi technologies as plain communication channels. Encryption must be provided from trusted open-source platforms, and must be based on public ⸺as in well-known, and audited by cryptography experts⸺ algorithms only.

If you do not trust your encryption, you must assume your network traffic is public to at least someone.

Who? June 20, 2021 7:04 AM

This deliberate weakness in the GEA-1 and GEA-2 encryption algorithms is just another consequence of the crypto war during Clinton’s administration.

Clive Robinson June 20, 2021 10:23 AM

@ MarkH,

My intention was to explain that for a strong stream cipher, security is completely independent of plaintext properties;

Oh, if only that were true of deterministic ciphers.

As I’ve indicated in the recent past, a stream cipher can be considered to consist of three “independent” parts,

1, A programmable counter.
2, A deterministic mapping function.
3, A deterministic mixing function.

Obviously the only secret in the system, where “random” can happen, is the value you load into the counter; from that point everything is then fully predictable, apart from the plaintext into the mixing function, which has a small amount of “random”.

Worse, the mixing function is usually simple and fully linear, and fully reversible with little or no complexity.

Thus pretty much everything to do with the security of the system relies on,

1, The deterministic mapping function.
2, Absolute following of the rules of using the stream cipher.

Some of the more obvious rules that if broken will help in cryptanalysis are,

2.1 No keystream reuse.
2.2 No plaintext resent under a different keystream.
2.3 No modification to resent plaintext.
2.4 No “known plaintext”.

Break any of them only once then security goes right out the window.
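
Rule 2.1’s failure mode takes only a few lines to demonstrate: two ciphertexts made with one keystream XOR to the XOR of the two plaintexts, with the keystream cancelled out entirely:

    # Keystream reuse: c1 XOR c2 = p1 XOR p2, classic two-time-pad breakage.
    import os

    p1, p2 = b"attack at dawn", b"defend at dusk"
    ks = os.urandom(len(p1))                           # one keystream, used twice
    c1 = bytes(a ^ b for a, b in zip(p1, ks))
    c2 = bytes(a ^ b for a, b in zip(p2, ks))
    x = bytes(a ^ b for a, b in zip(c1, c2))
    assert x == bytes(a ^ b for a, b in zip(p1, p2))   # key-independent residue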

There are two basic ways to reduce these rule issues,

1, Make the mixing function very much more complex.
2, Use the cipher in a chaining feed forward/back mode with an IV that only ever gets used once.

There is also,

3, Add another stage either just before or just after the deterministic function to mix in further “random”.

This extra random can be as simple as a true random “nonce” used as “whitening” across the full counter width prior to the deterministic function. Or another changing random stream on the output of the deterministic function prior to the mixing function, or used as a changing whitening on the input of the deterministic function.

These, if used carefully, break most of the standard “attacks” against the deterministic mixer function.

For instance, back in the late 1990’s when ARC4 was “fresh”, I added a Blum Blum Shub (BBS) generator whose output got added to the Iptr as whitening that changed every predetermined number of counts.

I also added an “additive” version of a word-wide LFSR that Knuth documented from private correspondence with Mitchell and Moore (see Vol. 2 of The Art of Computer Programming). This got used as a further offset value to pick a different value from the ARC4 S-array than the one output from the swap process[1].

However as with all such basic building blocks there is an urge to use them for other things[2]…

The result of such experimenting is that you find you really need to add some of the features that are common to Block Ciphers to stream ciphers, to make them more tolerant of user errors that would otherwise prove catastrophic to deterministically generated non-OTP keystreams.

Another thing you find is that it’s very difficult to “backdoor” a block cipher, as there is no usable redundancy within most block ciphers. The same is very far from true of all deterministic stream generators that use LFSRs.

Which is why I have a preference for “card shuffling” algorithms for key stream generators, as they are easier to make obviously “one way”, easily and quickly.

Oh, one fun little trick: feed the plaintext into a multi-section digital low-pass filter to get an average over say the last ten or twenty chars, and add this as an offset into the mixer function, or into the S-array value selection on the output of a card shuffling algorithm.

That is “Have modes for plaintext” as well as “Modes for cipher algorithm use”.

[1] This was done to make a DRNG simulation that was part way between a CS-DRNG and a TRNG, for use in a multi-tasking OS for high-end embedded systems. The addition of the BBS was to simulate adding a slow TRNG input to evolve the S-array. The Mitchell-Moore generator was so that several people could use the same fast generator without getting the same number sequence.

[2] One such was to produce blocks of “random round keys” in a mixer function that used a multiple-round Feistel structure, to ensure the plaintext did have a very noticeable effect on the key stream generator output.

MarkH June 20, 2021 11:47 AM

@Clive:

I’m baffled by 3 of the 4 stream cipher rules asserted above …

2.1 No keystream reuse.

2.2 No plaintext resent under a different keystream.

2.3 No modification to resent plaintext.

2.4 No “known plaintext”.

I get rule 2.1, which is analogous to the absolute stricture against re-use of one time pads.

I don’t see how any of the other rules applies to a strong stream cipher.

The keystream generator of a strong stream cipher is a cryptographically secure deterministic pseudo-random bit generator. For example, the BBS generator — of which you made such clever use in the example you gave above — could in principle be used for a stream cipher.

BBS, though fully deterministic, is cryptographically strong because there is no known method for predicting the next output bit with better than 50% accuracy unless you know the secrets; and inferring the secrets is believed to require an infeasibly costly computation.

Analogously, with the Shrinking Generator no way is known to predict the next bit without knowing the polynomials and states; and there is no known method to infer those values without an infeasibly costly computation.

For both generators, even knowledge of a very long bit sequence cannot be used to predict the next bit, or even “postdict” the bit which preceded the sequence.
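As an illustration, BBS just squares modulo a Blum integer and emits low-order bits. A toy version in Python (parameters absurdly small, so this one is breakable by inspection):

    # p and q must be primes congruent to 3 mod 4; these are toy values
    p, q = 499, 547
    N = p * q

    def bbs(seed, nbits):
        x, out = seed % N, []
        for _ in range(nbits):
            x = (x * x) % N    # the one-way step: squaring mod N
            out.append(x & 1)  # emit the least-significant bit
        return out

    keystream = bbs(159201, 64)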

For a stream cipher with such properties, how can a known plaintext be of any possible use to an attacker?

I don’t get it.

Tatütata June 21, 2021 11:39 AM

@MarkH

Re. #3 : see Differential Cryptanalysis
https://en.wikipedia.org/wiki/Differential_cryptanalysis

I’ve been mulling over a certain procedure for a long time.

Password hashes are salted to prevent pre-computed (rainbow) table attacks. In that case, the salt is stored in clear alongside the hash.

Why couldn’t something similar be implemented with stream ciphers against known plaintext attacks?

The solution would be to add an extra random scrambler, not necessarily one that is cryptographically secure.

Stream encryption is performed at an OSI layer above the one synchronising source and receiver.

What if at the initiation of a stream a fixed size random prefix were to be sent, and used as a seed for a secondary scrambling code? A quantity derived from that block, e.g. through a hash, could define a second junk prefix, preventing a third party from predicting exactly where the payload begins.

Plaintext attacks would become much more difficult, as even if one was aware of the ultimate plaintext, they would still have to guess the independent random prefix in addition to the session key.

This scrambler would be implemented at a superior protocol layer (presentation?), together with compression.

Is this done in any form?

MarkH June 21, 2021 3:58 PM

@Tatütata:

For a standard stream cipher the keystream is independent of the plaintext, so differential cryptanalysis is impossible from known or chosen plaintexts.

Here’s an example of how differential cryptanalysis can be applied to a stream cipher:

Suppose that changing one bit of the key (or IV) yields a keystream which relates to the original (before the bit-toggle) keystream according to some recognizable pattern which is largely independent of the other bits.

If such a pattern is found, then comparison of keystreams can potentially yield clues to the relationship between the keys and/or IVs.

This kind of analysis can obviously be subtle and tricky, but if the patterns are systematic, it can give rise to a practical cryptanalytic attack.

If any cipher systematically exhibits such differential patterns, it is vulnerable.
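A crude way to picture the test, where ciphergen is whatever keystream function is under examination (everything about this probe is illustrative):

    def differential_bias(ciphergen, key, iv, bit, nbits):
        # flip one IV bit and XOR the two keystreams together
        ks0 = ciphergen(key, iv, nbits)
        ks1 = ciphergen(key, iv ^ (1 << bit), nbits)
        diff = [a ^ b for a, b in zip(ks0, ks1)]
        # a well-behaved cipher should hover near 0.5 here;
        # a systematic drift hints at an exploitable pattern
        return sum(diff) / nbits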

Using the plaintext to obscure the keystream of a weak cipher can make an attacker’s work harder, but it’s really putting the cart before the horse. The cipher is supposed to protect the plaintext, NOT the other way round! [1]

========================

The mechanism you propose is new to me — Clive has so much experience, I’d be interested to know whether he has seen it.

For practical reasons, it would presumably be limited to some maximum prefix size of L bits. I suppose that the variable shift of the actual message payload would increase the attacker’s workload by a factor of not more than L (and depending on specifics, perhaps much less than L), which might not be enough protection if the cipher is weak.

[1] Among us grizzled old-timers who witnessed semiconductors supplanting vacuum tube technology, it was said that “the transistors blow out to protect the circuit breakers and fuses.”

Clive Robinson June 22, 2021 12:37 PM

@ MarkH,

Apologies for not responding faster; as you might have noticed things have been a bit hectic over the past few days.

Anyway, back to some security thinking of the purer kind again.

The keystream generator of a strong stream cipher is a cryptographically secure deterministic pseudo-random bit generator. For example, the BBS generator could in principle be used for a stream cipher.

First define an old truism of security,

“What is in theory, rarely is in practice, and never is in the first revision.”

That was certainly true of AES, and I’ve yet to see any stream cipher make it from the designer’s notebook to a secure implementation without a few trips around the build bench, and that includes OTP-based systems.

So “strong” means little in practice and “in principle” has real teeth to rip the flesh from your bones if you get it wrong.

Our host has written a couple of books on this, and you can tell there are some scalds where the water got a little too warm at some point on the learning curve.

But first back to a simple case, to do a little stage setting: the “linear weakness” that can bite the unwary.

“If you combine the outputs of two identically tapped lagged Fibonacci generators such as LFSRs, all you get is a delayed version of the stream.”

Does not sound like much to most people but…

It has real implications for using a stream generator, in that if you send message 1 against stream offset A it will be secure. If you send message 2 against stream offset B, provided none of stream A is part of stream B, then it will be secure.

But… What of message 3 against offset C?

Well, ensuring A and C have no overlap, as well as ensuring B and C have no overlap, is insufficient for security. You also have to ensure the resulting stream from A+B has no overlap with C.

But what of message 4 against offset D?

Well, D needs to have no overlap with A, B, C, and also A+B, A+C, B+C; with each message sent the problem gets worse.

Even if the stream is nonlinear similar issues can arise for various reasons.

But just how linear is even a “strong”, “cryptographically secure” stream cipher? Well, that depends…

As I’ve previously noted, an LFSR can be modeled by a counter that drives a mapping function, the output of which would be step for step identical with the real LFSR.

The same is true for any stream cipher including what we consider the “strongest” and most “cryptographically secure” a practical One Time Pad.

So the model can be considered universal for stream ciphers.

What we do know is counters are not just highly linear they are in many ways entirely predictable. Therefore the implication is all the security when a stream cipher is in use is defined by two things,

1, The current value of the counter.
2, The non linear complexity of the mapping function.

We know from the One Time Pad that, even though every bit of the counter is highly dependent on every other bit, the values effectively stored in each mapping position have to be fully independent of each other. Anything less does not have “Perfect Secrecy” and is in some way deterministic, thus in theory at least breakable.

At the other end where there is little or no nonlinearity we know that 2n bits are sufficient to break an LFSR. So the mapping has to be chosen carefully.

We also know that to get OTP-level security the mapping needs to be a memory device that, even though highly linear in address decoding, uses storage elements to hold individual elements that are fully independent of each other. Such a memory device is going to quickly get expensive. That is, at a minimum a 2^100 bit RAM is going to have 1.27×10^30 memory elements. Which is going to be over 5 thousand billion billion billion transistors just for the bit storage. Which is way way too much to put on a single chip or in fact in the world’s largest refuse pit. At last count there have been less than 2.0×10^22 transistors made so far… Which is about four billionths of that required (if the brain is functioning this evening)…

Thus the mapping has to be done in a radically different way and there is no way around introducing a fair amount of linearity…

Which gets you a lot closer to the linearity and deterministic nature of a counter or LFSR than you would really like.

Are you with me so far?

SpaceLifeForm June 22, 2021 3:11 PM

@ MarkH, Clive, ALL

Adversaries watching your crypto is very different from adversaries controlling your crypto. If things have deteriorated that far, you might just as well “send in the clear.”

Disagree. You must always impose a COST on any potential attacker. You can not just ‘give in’.

If you do not impose a COST, then you are low-hanging fruit.

Ask the orgs that have been ransom-wared if they imposed enough COST on the attackers.

You must impose a COST, and make the attackers actually work. Make the COST high enough so they move on.

Trim your low hanging branches, so they can not easily reach the low-hanging fruit.

They will move on to another tree (org).

SpaceLifeForm June 22, 2021 4:36 PM

@ MarkH, Clive, ALL

To make the analogy clearer:

There are three apple orchards.

The first orchard allows visitors (harvesters), for a low COST X, to pick a peck or two of Apples. All of the low hanging fruit is picked nearly daily.

The second orchard allows harvesters to use cherry-pickers to gather a lot of Apples for a higher cost, but they get to pick the apples up high that are ripe. The cherry-pickers are secured when not in use.

The third orchard leaves the cherry-pickers out at night, with keys in ignition, with spot-lights and chainsaws.

Weather June 22, 2021 5:12 PM

@markh ,Clive ,slf
I can’t release it until an update on the hash; SHA-3 will likely fail.

Clive Robinson June 22, 2021 11:56 PM

@ MarkH, SpaceLifeForm, ALL

Adversaries watching your crypto is very different from adversaries controlling your crypto. If things have deteriorated that far, you might just as well “send in the clear.”

Well they have deteriorated that far, and have been that way by default since at least the 1960’s if not a lot earlier. With nearly all communications “standards based” they are not under your control, and in all but a few cases legally you have no choice but to use “standards based” communications. So I would take it as a given.

Remember, just because you can see the crypto algorithm on paper and verify it’s being used does not mean the algorithm or its implementation is not “backdoored” in some way. AES almost certainly was: the NSA ‘advised’ NIST on what to do in the AES competition. They relied on the fact that the majority of the people –including those cryptographers that supplied the contestant algorithms– either did not understand or did not care about,

Security -v- Efficiency.

Thus the NSA rigged the competition so that the algorithms had to be fast for high-capacity comms systems, but have a low ROM footprint for embedded systems and a low gate count for ASICs etc. Also all the algorithms had to be openly published.

The result: programmers, not understanding crypto or how things like timing could be used for creating covert channels, pulled either the very fast or the very small code into their libraries. The result: most practical implementations of AES were “backdoored” by default. Worse, even if you implemented your own crypto securely, how about the person at the other end?

That is the state of reality; there is nothing you can do to change that other than treat all such “standards based” communications as “Open Broadcasts, with no security”.

As someone once remarked, “suck it up and live with it”, tough but true.

But do you have to live with the implied lack of security?

The simple answer “NO”.

That is, just treat it as though it is completely insecure and put your own End to End crypto over the top.

Easy, unless you live in say France, where until not so long ago the use of “unapproved crypto” was a significant criminal offence, and there are far worse places than that…

So you also have to assume the “authorities” are going to look at your sent data and if it does not look like innocent plaintext to them, then you are going to get treated to a big helping of “personalised interest”.

So your next option is the obvious and most easily detected “steganography”.

At which point somebody is going to get close to a “$5 wrench” or “rubber pipe” or “involuntary snorkeling” the latter a standardised communications system in the USA…

To many it looks hopeless at this point, but it’s not, and we’ve actually known this publicly for just about forty years…

Remember a point I made a week or so ago?

It’s about Gustavus Simmons’s “Prisoners’ Problem” and “subliminal channels”[1] paper presented at Crypto ’83. Whilst he was talking about hiding information in digital signature schemes, you should realise that,

“Signature algorithms like ElGamal and DSA have parameters which must be set with random information. He shows how one can make use of these parameters to send a message subliminally. Because the algorithm’s signature creation procedure is unchanged, the signature remains verifiable and indistinguishable from a normal signature. Therefore, it is hard to detect if the subliminal channel is used.”

is broader in scope than just digital signatures.

Therefore,

Any plaintext message with “structure” that contains “random elements” that can be set by the sender can be used to implement “subliminal channels” to covertly send information in “plain sight”.

Which leaves the problem of,

Therefore, it is hard to detect if the subliminal channel is used.

How to change “hard to detect” to “impossible to prove”?

Well the proof of security of the One Time Pad is best stated as,

“All messages of the same length or less are equally probable.”

If they are “equally probable” then there is no “distinguisher” so there can be “no proof” within the message.

Thus as long as the rest of your operational security (OpSec) stops “distinguishers” being established by observable “cause and effect” there is no proof.

Oh, and the use of the OTP also gives the first party protection against second party betrayal[2], by giving them full deniability. Which you do not get with any other crypto system, as they are “deterministic” thus have inbuilt in-band distinguishers.

So even though OTP’s are generally talked down by cryptographers, they still have advantages, not just in spite of their published disadvantages, but also over all “Block Ciphers” and most “Stream Ciphers”, even if those ciphers can not –yet– be broken.

So the last issues to talk about are the required “structure” and “random” to get such a system to work. Whilst not immediately obvious to many, to communicate useful information you have to have them both, you can not avoid it.

Information theory clearly states early on that the information content of a message is based on the potential entropy in the message; Claude Shannon made that publicly clear back in the 1940’s. But prior to that it was shown that information requires structure, otherwise it is just unintelligible noise.

Look at it this way: if I send you a lengthy stream of bytes without apparent structure then at best it is just “unknown data”. It needs “meta-data” to give it meaning. Meta-data is at some level “structure”; when you think about it you realise it has to be.

Whilst the meta-data can be in-band or out-of-band to minimise errors and maximise the actual information carrying capacity some structure has to be in-band, as a consequence of the laws of nature (speed of light, black body radiation, radiation transport, thermal noise, etc).

It’s important to understand because ultimately you arrive at a proof that as long as you can communicate information, you can put covert information in the information being sent.

At which point it is “game over” for either “Communications” or those authoritarians who want “No secrecy”. As society can not survive even a few days without communications, it’s fairly clear who is going to be the loser, as in self defence society will destroy the authoritarians if they try the “No Communications” route.

So some things you can get, if you wish to put in the effort, to consider,

1, Full deniability even against second party betrayal.

2, Full deniability of use as there are no usable in-band distinguishers.

3, Will work in any communications channel where the first party can select random elements. Which ultimately is all communications channels.

So if you have any communications you can have secure communications within it, and can not be stopped from so doing. However the bandwidth might be extraordinarily low, but that’s another issue.

This of course tells you two other things,

1, No system or entity can be trusted.
2, No system can be guaranteed secure.

But then we knew that anyway 😉

[1] Overview of Gustavus Simmons’s “Prisoners’ Problem”,

https://en.m.wikipedia.org/wiki/Subliminal_channel

[2] Protection against the “Second Party Betrayal” attack model is not, as far as I am aware, ever taught to students or in cryptography books. I suspect because it’s been thought of as “impossible to solve”, thus just assumed to be a “given”. Well, as I’ve indicated, it’s not.

MarkH June 23, 2021 10:13 AM

@Clive:

Are you with me so far?

So far, I’m lost at every step. If you’ll be patient, maybe I can incrementally improve my understanding.

The first part of the reply you offered seems to assume that a premise is false, which doesn’t look exactly like Euclidean reasoning to me.

Here’s an assertion:

IF predicting keystream bits is more expensive than an attacker can afford,

AND the target ensures that keystream is never reused,

THEN there exists no plaintext which will aid the attacker in cryptanalysis of any ciphertext.

Apart from claims that the predicates are unrealistic, why is this assertion false?

========================

More than once, you’ve offered some version of “If you combine the outputs of two identically tapped lagged Fibonacci generators such as LFSRs, all you get is a delayed version of the stream.”

First, I’d be grateful if you could point me to a reference on that; I’d like to understand it better, and especially the meaning of “combine” in that formulation.

Second, it seems to set up the following paragraphs about encryptions at various offsets, which I didn’t get at all. It seems to me as though the reasoning presupposes some form of keystream re-use.

I’m sure there’s more to the reasoning than I grasped; I’d like to understand it.

JonKnowsNothing June 23, 2021 11:07 AM

@MarkH, Clive, All

re: So far, I’m lost at every step.

Well, this makes me feel much better… 🙂

Although my common state is “not knowing”, it helps to know I’m not the only one reaching for Maths references and Wiki explanations. I even pulled out my old Knuth books recently … 🙂

It’s all fun though, as I’m getting a nearly free education of high quality, the kind I never was able to afford doing the work-school-work shuffle years ago. Even grinding through an advanced degree there was a serious gap between what you needed to know to graduate and what you needed to know to get a job.

For practical reasons, like not starving, fixating on getting employment won out over my personal preference to have stayed longer and immerse in the more nuanced aspects of “everything”.

I am enormously grateful for all the detailed commentaries. It might take me a bit longer to get from A to Z but as I no longer have to meet any project timetables, I can get there in my own good time.

Clive Robinson June 27, 2021 4:49 AM

@ MarkH, JonKnowsNothing, ALL,

So far, I’m lost at every step

OK, let’s try a practical case that comes from the real world.

For many years the Five-Eye nations from the British side used a One Time Pad based paper tape “Super Encipherment System” on link traffic, using the Rockex[1] British Inter Departmental (BID) system I’ve mentioned before.

Unbeknown to many it has a secret that is not at all well known.

The OTP is fine for link encryption, but has the KeyMat issue all OTP systems do, of shipping sufficient quantities of KeyMat by the shipping container load… So many people have foolishly assumed OTP = Stream Cipher, therefore you can use a Stream Cipher instead of an OTP for link encryption.

You can not, especially not any stream cipher based on LFSRs.

This knowledge was a state secret for many many years, and is still not really realised by many, much to the joy of certain people. It’s clear that some Crypto AG equipment supplied to many “countries of interest” had this failing, but in a more subtle way.

You can look the attack up but many just assume it’s a “nothing burger” attack…

It’s known as a

“Chosen Plaintext Attack”

And it’s really quite devastating against LFSR systems, because of their additive property that gives a “shift in time” thus easy raw key stream.

All you need is two plaintext messages that are the bit inverses of each other, sent at two different times.

The result is that if you know those two times, then all you need do is XOR the two outputs of the Super Encipherment to get a nice clean length of LFSR-generated Key Stream that may or may not yet have been used… Thus you can run it against the broadcast link traffic till you get a change in statistics that indicates you’ve a “hit”.

Or you can analyse it as it does not contain any plaintext, just KeyMat.
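A toy illustration with a 4-bit LFSR (the real thing differs only in scale; the message bits here are arbitrary):

    seq = [1,0,0,0,1,0,0,1,1,0,1,0,1,1,1]  # m-sequence of x^4 + x + 1
    msg = [1,0,1,1,0,1,0,0]
    inv = [b ^ 1 for b in msg]             # the chosen second plaintext
    k1, k2 = seq[0:8], (seq + seq)[5:13]   # keystream at two offsets
    c1 = [m ^ k for m, k in zip(msg, k1)]
    c2 = [m ^ k for m, k in zip(inv, k2)]
    # XOR of the ciphertexts cancels the plaintext completely...
    stripped = [a ^ b ^ 1 for a, b in zip(c1, c2)]
    # ...and by shift-and-add it is raw keystream at a third offset:
    assert stripped == (seq + seq)[10:18]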

Does that make things a little clearer?

Such an attack obviously does not work against the OTP because no matter how you shift them in time they are fully independent of each other.

Likewise some CS-PRNG’s used to make Key Stream Generators are not susceptible to this sort of attack, if certain precautions are taken.

[1] It’s an unwieldy beast of a machine, and had some interesting innovations to try and get around what we would once have called TEMPEST issues but now EmSec issues,

https://cryptomuseum.com/crypto/uk/rockex/index.htm

Thanks to US FOI we know about it publicly, otherwise the UK Gov would still have it as one of the highest of State Secrets…

MarkH June 27, 2021 1:43 PM

@Clive:

Does that make things a little clearer?

Not yet for me … probably I’m too dense to work it through, without more details.

What might help me a lot, would be a reference to a precise mathematical definition (or better yet, derivation) of the “shift in time” characteristic, particularly as applied to outputs derived from more than one LFSR. I’m searching …

As a reminder, SpaceLifeForm wrote a proposition concerning stream ciphers in general, not limited to those constructed from LFSRs.

Clive Robinson June 27, 2021 5:35 PM

@ MarkH, ALL,

First, I’d be grateful if you could point me to a reference on that; I’d like to understand it better, and especially the meaning of “combine” in that formulation.

Look up “auto-correlation function” for the likes of CDMA and Spread Spectrum systems; there are often explanations attached via the likes of “Gold Codes” and “JPL Ranging Codes”.

Otherwise, what do you understand about Chinese Clock mathematics?

The simple case: imagine a system like an old-fashioned car odometer or trip meter. But rather than have the first wheel drive the second and so on, which is mechanically difficult/problematic, instead imagine all of the wheels driven together.

If all, say, six wheels have the same number of teeth then they all rotate at the same rate as each other. So even though you might have six wheels they all stay synched together, and you only get a count of ten before all the wheels start exactly the same count cycle again, not a count of 10^6.

Now imagine just two wheels, one that has ten teeth, the other eleven. They both start at zero, but when the ten-tooth wheel gets around to zero the eleven-tooth wheel now shows “10”, the next time the ten-tooth wheel shows zero the eleven-tooth wheel shows “9”, and so on; eventually after 10×11 steps both wheels show “0” again.

As long as the wheels have teeth numbers that are “relatively prime” to each other then you get the full count.

Without going into fairly laborious logic maps and equations, it can be shown that a shift register with feedback through a “half adder” or XOR gate at the right “primitive polynomial” taps goes through all the values except the all-zero state, which is “self replicating” because there is no “1” at either input to the half adder to make it output a “1”.

It’s again not hard using laborious logic maps to show that “two identical additive streams, when added together, give the same stream but shifted in time”.

If you want to demonstrate this to yourself, look up a four bit LFSR and write its entire stream down on the edge of two pieces of paper. When they are aligned and you add/XOR them together you just get a stream of zeros; when they are shifted and added you get the same bit pattern only shifted in time, which is why the out-of-phase auto-correlation of such a sequence is a constant −1/(2^N − 1) (hence the auto-correlation function).
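Or let Python do the paper shuffling, here with the 4-bit LFSR from x^4 + x + 1:

    def mseq():
        s, out = 0b0001, []
        for _ in range(15):                # period 2^4 - 1 = 15
            out.append(s & 1)
            fb = (s & 1) ^ ((s >> 1) & 1)  # taps from x^4 + x + 1
            s = (s >> 1) | (fb << 3)
        return out

    seq = mseq()
    for shift in range(1, 15):
        rot = seq[shift:] + seq[:shift]
        summed = [a ^ b for a, b in zip(seq, rot)]
        # the sum is always the same sequence at some third shift
        assert any(summed == seq[k:] + seq[:k] for k in range(1, 15))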

You can show similar properties for Chinese Clock counting, and also conventional counting where sequence addition is used. If you want to dig deep, look up “Cayley tables” on “Abelian groups”, which come about due to the way we define the behaviour of addition over commutative rings and fields that are prime (Zp). Oh, and as a consequence “Two’s complement” works, as do complements in other bases. Also groups based on “primitive polynomials” are Abelian groups, hence the applicability to LFSR’s.

I guess the question is “How far down that rabbit hole do you want to go?”

Most read around the first ten pages in a book like “Foundations of Logic and Mathematics”(Y.Nievergelt) or “Rings, Fields and Groups”(R.B.J.T.Allenby) and decide they’d rather just stay in the light…

But, there is a rather more prosaic reason for not trying to do it here. That is this blog does not support the required symbols to tie up with those you will find in maths books.

Weather June 27, 2021 6:47 PM

@clive
I actually understood that one post. Side thing…

In a byte range, say, chaining ^ ~ | & :
01000010
^ 11100001 → 10100011
~          → 01011100
| 01000010 → 01011110
& 11100001 → 01000000
(the inverse); AES I know is block, but

Ror17
^
~
|
&
32 bit input, instead of byte, and cross it with 3/5 2/7 4/6 ror to get a new one.

MarkH June 27, 2021 7:02 PM

@Clive:

I’m trying to “boil it down,” and will offer here what I think I got:

Suppose that from a single LFSR with output period P, a record is made of distinct sequences X and Y of length n < P .

If bit sequence C is formed by the bitwise linear combination of X and Y, it will contain some bit sequence of the LFSR output that was in neither X nor Y.

In other words, capturing parts of the bit stream can reveal another part which was not captured.

[A] Am I getting that right?

[B] Is this part of some stronger or more general argument?

In terms of application to cryptography, I can imagine this property being used for an attack, especially if the autocorrelation shift is known to the attacker … provided that the output of a single LFSR was used as a keystream.

However, no educated person uses the output of a single LFSR as a cryptographic keystream, so the hypothetical above is not useful.

I’m sure you must have some realistic scenarios in mind, and that I haven’t grasped them.

Clive Robinson June 28, 2021 2:17 AM

@ MarkH,

In other words, capturing parts of the bit stream can reveal another part which was not captured.

Now take it on a bit.

You’ve got C=X+Y, from just two messages being sent.

Now think D=X+C, E=Y+C, thus the question of how far you can extend the addition and what you can get from it.

It turns out a lot; have a look at how the Walsh transform is constructed.

If you look into the Berlekamp-Massey algorithm, and how it uses “gaps” to measure linear complexity and build an LFSR that approximates the sequence Sn, you start to “observe” some correlations.

Which brings us to,

However, no educated person uses the output of a single LFSR as a cryptographic keystream, so the hypothetical above is not useful.

But you’ve also said of the shrinking / decimating generator,

It’s wonderfully small and simple, made of two LFSRs. One is the source for keystream bits; the other functions as a “gate”, with raw outputs of the first either being used in the keystream, or discarded.

What is the shrinking generator in the worst case?

That is, what if in effect I put all zeros in the second LFSR?

Or I pick tap and start values for the second LFSR such that I get two or three “long gaps” early on?

I’ve already shown that a “chosen plaintext attack” can be used to strip the plaintext off by just sending two messages that are the bit inverses of each other.

So if I know or can find out how long the “gaps between ‘gaps'” are I can build most of the 2N bits needed for the Berlekamp-Massey algorithm to work.

Whilst the shrinking generator algorithm might in theory be secure, the implementation need only add “jitter” to the edge of a pulse when the shrinking generator’s second LFSR changes state. If you doubt this, I’m reasonably certain that the NSA manipulated the AES contest to deliberately ensure implementations would have covert time-based channels leaking information.

Why? As I’ve mentioned before, one of the original failings of the One Time Tape super encryption machines was the “pull-in and drop-out” times of the circuit used to perform the XOR function. All you needed to do to strip the One Time Pad tape off was observe the teletype pulse widths on the line with an oscilloscope… Both of the agencies that preceded the NSA and GCHQ were well aware of this… They have exploited various “key strength” tricks, not just with the systems they issue to their own Military for field ciphers but with equipment supplied to foreign governments via Crypto AG. Thus to say they have “previous” is an understatement. Thus rigging the AES competition is very very much in line with their historic MO.

It’s why having any part of your crypto under somebody else’s control,

1, In any way you can not fully see

and

2, You do not fully understand

Is a real danger. Which is the case with ALL current consumer level crypto in use for communications.

Arguably even the “open community” does not “fully understand” what the SigInt agencies currently do, thus we fail at point 2 and will continue to do so almost indefinitely.

It’s why I say,

“Assume any communications channel to be a plain broadcast channel and mitigate appropriately”.

That is just as Diplomats and the Military have for centuries, put your own layer of crypto over the top and assume everything below it is “untrustworthy”.

It’s why I’ve continuously said on this blog and other places “Do all your encryption/decryption Off-line, never On-Line” and “Use two energy-gapped systems, one for your work and encryption, the other as your communications end point”.

Because if you do not you have no security thus have no privacy and potentially all the problems that will cause you.

But as a continuing thought, ask yourself this,

“How nonlinear is decimation of an entirely linear function?”

I have my own thoughts on that and it’s a trade off in many ways.

Clive Robinson June 28, 2021 2:58 AM

@ MarkH, ALL,

A couple of papers on the Berlekamp-Massey algorithm.

The first helps explain it in a way many find useful,

http://hlombardi.free.fr/publis/BMAvar.pdf

The second extends it in a way that can be most useful,

http://neilsloane.com/doc/Me111.pdf

But also have a read of “Handbook of Applied Cryptography”(Menezes et al) chapter 6 §6.2.3 and play “paper computer” with the algorithm §6.30 (yes the run time is O(n^2) so pick short runs of Sn bits).
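For anyone who would rather let the machine play paper computer, here is a minimal GF(2) Berlekamp-Massey in Python (my own transcription of the textbook algorithm, so check it against §6.30 before trusting it):

    def berlekamp_massey(s):
        # s: list of bits; returns (L, C) with C the connection polynomial
        n = len(s)
        C, B = [1] + [0] * n, [1] + [0] * n
        L, m = 0, 1
        for i in range(n):
            d = s[i]  # the discrepancy
            for j in range(1, L + 1):
                d ^= C[j] & s[i - j]
            if d == 0:
                m += 1
            elif 2 * L <= i:
                T = C[:]
                for j in range(n + 1 - m):
                    C[j + m] ^= B[j]
                L, B, m = i + 1 - L, T, 1
            else:
                for j in range(n + 1 - m):
                    C[j + m] ^= B[j]
                m += 1
        return L, C[:L + 1]

    # 2n = 8 bits of the x^4 + x + 1 m-sequence recover L = 4
    assert berlekamp_massey([1, 0, 0, 0, 1, 0, 0, 1])[0] == 4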

MarkH June 28, 2021 3:50 AM

@SpaceLifeForm, Clive:

We’re having some very interesting discussion about some side roads. For the sake of focus, I want to take a few minutes to get back to the principal concepts:

• by definition, within the period of a strong stream cipher, examination of any number of keystream bits does not enable prediction of even one other keystream bit by a feasible amount of computation

• if keystream extrapolation is practicable, then the cipher is broken (not just in the pejorative sense, but cryptographically broken) — DON’T RELY ON BROKEN CIPHERS

• if you’ve chosen a strong cipher, you don’t need to waste time worrying whether any qualities of the plaintext will affect security; if it’s a broken cipher, don’t imagine that playing around with the plaintext will save you

• if you’re trying to arrange your plaintext to “protect” your cipher, you’ve just put your pants on backwards

• “security by obscurity” is not generally recommended with good reason (people try it every day, with frequent disastrous outcomes)

• as Clive wrote, if for whatever reason you don’t trust the provided crypto, treat it as an open channel and implement the security protocol of your choice on top of it

========================

It’s been common knowledge for many years that for an LFSR with output period P, any consecutive 2 log2(P) bits can be used to compute the connection polynomial, and thereby the entire output stream.

Does the autocorrelation property yield a better attack than that, should someone recklessly use a single LFSR output stream for a cipher?

========================

Side channel attacks are surely a critical security matter, and also a completely distinct topic from algorithmic attacks based on data-at-rest.

Bruce’s post is about the suspected weakening of a cipher with respect to data-at-rest attacks.

For me, clarity of understanding is aided by keeping the apples and oranges in separate bushels, rather than mashing them together in the food processor.

MarkH June 28, 2021 3:58 AM

@Clive:

Thanks kindly, for the links to Berlekamp-Massey presentations.

To my understanding, the Berlekamp-Massey algorithm is what transformed Reed-Solomon codes from an exotic technique used for deep space probes, to the ubiquitous technology inside countless millions of consumer devices (and one of the embedded products I worked on, thanks to a very nice open source Reed-Solomon implementation by a German author).

Clive Robinson June 28, 2021 8:37 AM

@ MarkH,

within the period of a strong stream cipher, examination of any number of keystream bits does not enable prediction of even one other keystream bit by a feasible amount of computation

Please do not say “strong”; it has no meaning, the same as “short” or “long” when talking about a bit of string.

By definition only one type of stream cipher meets,

examination of any number of keystream bits does not enable prediction of even one other keystream bit

And that is one where all bits are fully independent of each other, that is “not deterministic in any way”… Which is the equivalent of a One Time Pad.

What you are trying to argue is that the linear complexity L(Sn) over S0…Sn-1 requires a significant level of resources for a significant period of time to determine. The problem is that in the Open Community it is not as well researched as we need. It’s the reason the Stream Cipher finalists in the EU NESSIE competition were basically a bust.

You can show that the size of the State Array has some influence on the size of the “gaps” used to calculate L(Sn), but realistically the State Array update mechanism is always reducible to a purely linear function, thus it is always going to be possible to determine.

To prevent or delay this determination you have a generator output function that is all too frequently a static mapping of what is hoped to be a “one way function”. There is no proof that one way functions actually exist, or that they can not be reduced to a series of successive linear mappings by the use of certain algorithms that “guess out” nonlinear behaviour from linear behaviour.

The only solution is to make the mapping somehow non-deterministic. In block ciphers one way is to mix the plaintext with round keys before putting it through what are hoped are one way functions.

The general case with stream ciphers is to use a purely linear, easily reversible mixing function after the generator output function. Thus this very very simple substitution cipher ensures that the plaintext can easily be stripped by various real-world attacks.

Thus the entire strength of stream ciphers relies on what are little more than very large invariant substitution ciphers that are made to look polyalphabetic by the equivalent of a counter…

Due to the size of the needed total stream space, most such mappings are going to have quite a large linear component. Thus the question of the corresponding linear complexity L(Sn) arises virtually every time, along with the quality of the thin veneer of nonlinear complexity that overlays it…

Because once that thin veneer is breached it’s just the linear complexity L(Sn) that we know needs only 2n bits and ~O(n^2) or less effort…

We know that the State Array will have “gaps” in the range n-1 to 2n-1 ordinarily, and they can be as high as 3n-2. A good design would always try to keep the “gaps” down as close to n-1 as possible for obvious reasons. However, how do you tell the difference between a good design and one that mostly looks like a good design?

MarkH June 28, 2021 10:50 AM

@Clive:

It seems to me that “strong” is a cryptographic term of art, or at least used to be.

With suitable choice of parameters, RSA and DH are strong because after decades of analysis, public academic research has discovered no way to cryptanalyze them (with respect to data-at-rest) by a feasible amount of computation.

Likewise, Rijndael, Serpent and Twofish are strong in this sense.

Any pseudo-random bit generator can be used as a keystream generator. Notwithstanding that it’s absurdly slow, Blum-Blum-Shub can be used for a stream cipher. Although BBS is fully deterministic, nobody has yet found a feasible method for predicting bits without knowing the secret.

The six algorithms named above can all be cryptanalyzed in principle; all of them remain computationally secure.

========================

Experience has shown that designing a stream cipher to be fast and easy-to-use is a very hard challenge.

It seems quite plain to me that wherever strong block ciphers are practical, they are preferable.

Clive Robinson June 28, 2021 11:16 AM

@ MarkH,

Not sure if you will be able to read this or not,

https://www.sciencedirect.com/science/article/pii/S0885064X15001338

In essence, if you take an LFSR and add a non-linear combining circuit to it you effectively make a “filter”.

The downside of such filters is they are vulnerable to algebraic or spectral attacks that allow an attacker to effectively linearize the filter, thus making it amenable to linear attacks.

MarkH June 28, 2021 2:26 PM

@Clive,

I can indeed read it, thanks … though it took a while to figure out the PDF download.

Whether I will ever comprehend it, is a darker question.

This analysis has been public for more than 5 years. Has it been used for cryptanalysis of any published cipher?

Weather June 29, 2021 2:46 AM

@clive, mark, others
What book or formula would you use on a line graph to expose patterns, like mean, log, or a combination?

Thanks

Clive Robinson June 29, 2021 10:04 AM

@ MarkH,

This analysis has been public for more than 5 years. Has it been used for cryptanalysis of any published cipher?

Probably, but not quite in the way you mean.

If you look through the list of references you will find a paper[1] by T. Helleseth and S. Rønjom, the underlying work of which was used successfully against an eStream candidate. Our host @Bruce, having been involved with eStream, may well be able to say more.

But the paper is really a description of a “tool”, from which the tool can be built and extended (the idea is actually fairly simple when you get your head around the 20,000ft view). Most cryptanalysis tools these days, especially for stream ciphers, tend not to be used in attacking known cipher algorithms in a public way.

They tend to be used in a not too dissimilar way to George Marsaglia’s “Diehard(er)” and later tests (now maintained at Duke Uni[2]). At the lowest level by students cutting their teeth on their own simple designs, through to people working in telecommunications companies designing proprietary systems, where for some strange unaccountable reason the myth that there is money to be made in patented etc crypto algorithms tends to hang on by its fingernails[3]…

[1] T. Helleseth, S. Rønjom, Simplifying algebraic attacks with univariate analysis, in: Information Theory and Applications Workshop, ITA, La Jolla, CA, February 6–11, 2011, pp. 1–7.

[2] https://webhome.phy.duke.edu/~rgb/General/dieharder.php

[3] This is one of the unstated reasons behind the US “War on 5G”. Put simply, the US think they can force US patents on the world via 6G or some other standard, and sit there and rent-seek their way to world dominance… They tried it in the past, and it’s just one of the reasons closed US Cellular Systems Standards died whilst open EU Standards became preeminent. A lesson from “living history” people need to be aware of; watch out for the legal shenanigans that US industry lobbyists and legislators are going to try and unleash in the near future.

Clive Robinson June 29, 2021 10:54 AM

@ Weather,

What book or formula would you use on a line graph to expose patterns, like mean, log, or a combination?

Nearly all the tests are statistical in nature and produce a probabilistic output on a very long sequence of generator output either as bits or in the alphabet set (A{a0…an}).

The best place to start getting a grip on it would probably be §3.3.4 The Spectral Test in vol 2 of Knuth. Then move upwards to the Dieharder etc tests.

The point being humans can only see points, lines, planes and manifolds on graphs, which means at most 4 dimensions… Spectral tests go through many many dimensions looking for statistically significant patterns…

The important point to get your head around is that all deterministic generators at the end of the day consist of three basic parts,

1, A State Array
2, A State Array update function.
3, A Generator output function.

Often 2 and 3 are partially or fully combined. That is, those functions are the equivalent of look-up tables or mapping functions.

For human convenience in describing 1 & 2 we talk about either a “counter” or an “LFSR”. What it actually is does not matter in the slightest, as any such state array and update function can be shown to be directly equivalent to a linear function acting on the bit array, thus equivalent to a linear counter or linear LFSR and mapping function. Any non-linearity gets moved to the right and becomes a map on its own that is added onto the linear mapping function at the left.

cnt(n+1) = f(cnt(n))

where f(x) = Nmap(Lmap(x))

Thus the trick is to strip off the nonlinear map (Nmap) or extract any linear component and add that to the linear map (Lmap) in a successive process.

The thing is it has a very very high probability of success of reducing the nonlinear map (Nmap) down to a very very simple and sparse map which is well defined either in part (short sequence) or in whole (full sequence).

The downside is an “exponential run time”. However a “brute force attack” is effectively exponential run time as well. Each linear map the algorithm pulls out knocks a big chunk of that exponential time off. Thus these attacks against generators will almost always run faster and with fewer resources than a brute force attack, thus technically at least “break the algorithm”.

The obvious and wrong idea is to try to make what appears to be a highly complex nonlinear map, often by random selection or gut hunches etc… The thing is, if you look at linear functions like the Half Adder, whilst they may be made of nonlinear components their map is linear.

It’s actually very hard to come up with a nonlinear circuit where the output does not show bias that will show up in statistical tests.

The easiest way you can start is by not using nonlinear logic maps but permutation maps (card shuffling algorithms). Yes, permutations have their own problems to deal with, but as a starting point they are often easier in software and faster than logic maps, though slower than a two-legged dog in hardware…
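The classic example of a permutation-map generator is of course the ARC4 output loop; here is just the PRGA, shown purely as an illustration (RC4’s output biases are well documented, so this is not a recommendation):

    def prga(S):
        # S: a 0..255 permutation left behind by some key schedule
        i = j = 0
        while True:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]       # the card shuffle
            yield S[(S[i] + S[j]) % 256]  # state-selected output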

Weather June 29, 2021 12:38 PM

@clive
If I understand what you said: look for lines next to each other where they differ by a large or small amount, where it forms non-linear flows, say using ones right next to each other over three lines, and then do it again but skip every second, then every third… If there’s any linear part, add it to the Lmap.

I might not have explained that well, just woke up and heading to work, but thank you, you gave me something to look at, and I’ll add Knuth’s book to the to-do list.

Clive Robinson June 29, 2021 5:13 PM

@ Weather, ALL,

you gave me something to look at,

You might find this a more gentle introduction to the subject than the previous paper,

http://cacr.uwaterloo.ca/techreports/2009/cacr2009-04.pdf

Note in particular the comment at the bottom of page 2 as to why spectral attacks are shall we say more interesting.

I guess the big question some will be thinking is “Does this work back on Block Ciphers?” The simple answer is yes, it can do, depending on how you use them. Also “What about Quantum Computer attack hardened cipher primitives?” Well, my knowledge in that area is insufficient, but my gut tells me that if it’s possible someone is going to find it sooner rather than later, if they have not already…

Weather June 30, 2021 1:01 AM

@clive
Finished reading the cacr2009 PDF. I’m pretty sure the headache wasn’t from the beer. I gleaned three points, which is what you explained: the linear 4,12 with two binary values; 4 has a lot of LSBs but no MSB, 12 is evenly spaced. Which is the harder one, 4 or 12?
The second was what you said about finding Lmap and Nmap with j0–j5.

The rocket propulsion book I have uses that math and I’ve spent 15 years reading it…

Cheers

Clive Robinson June 30, 2021 2:48 AM

@ Weather,

The rocket propulsion book I have uses that math and I’ve spent 15 years reading it…

To mix up a couple of quotes,

“Nobody said you had to be a rocket scientist to be here, but sometimes it does help!”

Speaking of rocket scientists, one of my favourite observations on lab equipment and life, technical and personal, is to be found in John Drury Clark’s book “Ignition!”. Often it is quoted in two separate parts (highlighted), but the whole thing is worth reading,

“It is, of course, extremely toxic, but that’s the least of the problem. It is hypergolic with every known fuel, and so rapidly hypergolic that no ignition delay has ever been measured. It is also hypergolic with such things as cloth, wood, and test engineers, not to mention asbestos, sand, and water—with which it reacts explosively. It can be kept in some of the ordinary structural metals—steel, copper, aluminum, etc.—because of the formation of a thin film of insoluble metal fluoride that protects the bulk of the metal, just as the invisible coat of oxide on aluminum keeps it from burning up in the atmosphere. If, however, this coat is melted or scrubbed off, and has no chance to reform, the operator is confronted with the problem of coping with a metal-fluorine fire. For dealing with this situation, I have always recommended a good pair of running shoes.

Apparently the word “golić” is Polish for “shave” thus I guess hypergolic could also mean “A very close shave”…

A friend who went to Birmingham University in the UK (place of the last known smallpox death in 78), went to do chemistry but came out doing computing…

He pointed out that using a phrase such as,

“It is, of course, extremely toxic, but that’s the least of the problem.”

Is much frowned upon, as it might after all discourage people. So the prefered phrase is,

“Known toxicological disadvantages”

Which apparently confuses first year students, so Chem Profs are known to say,

“Oh don’t worry about that, it just means it kills people”…

Weather June 30, 2021 7:38 AM

@clive, others
Can two data points be linear, or do you need three at minimum, say 33 and 37? Median 32, max 42, min 27, with other data?
I ask because some of the data might be 31(0), 39(1), 32(2).
There are normally outliers in the data I’m working on; would you use them as anchor points, or signals, or just treat them the same as the median +-?

Clive Robinson June 30, 2021 9:20 AM

@ Weather, ALL,

Can two data points be linear or do you need three at minimum

The number of points you need is dependent on the real “transfer function”.

Frequently what is done is the real transfer function is plotted, then a lookup table is generated for the software that does “piecewise linear approximation”.
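Something like this sketch, where the breakpoint table is pure invention for illustration:

    # (raw reading, corrected value) calibration breakpoints
    TABLE = [(0.0, 0.0), (1.0, 0.95), (2.5, 2.6), (4.0, 4.1)]

    def correct(raw):
        if raw <= TABLE[0][0]:
            return TABLE[0][1]
        if raw >= TABLE[-1][0]:
            return TABLE[-1][1]
        for (x0, y0), (x1, y1) in zip(TABLE, TABLE[1:]):
            if x0 <= raw <= x1:  # interpolate within the segment
                return y0 + (y1 - y0) * (raw - x0) / (x1 - x0)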

You will need an absolute minimum of three points on each one of those piecewise linear approximations.

Just to check the circuit is acting within the norms of its transfer function. Then make any offset shifts and gain adjustments to get the actual required values in range.

You then need to take a series of readings around “a point of interest” such as the pass/fail limit.

The reason you need to do things this way is,

1, You want the pass/fail point to be accurate to detect the crime.

2, You want the magnitude of the fail to decide on sentencing as in “The driver was recklessly three times the legal limit”.

3, If readings are within certain ranges the tests should be reperformed on other equipment because (1) could be in doubt.

4, Likewise if readings are not consistent with the state of the person, then there may be another metabolic cause such as low or high blood sugar, both of which “can not be slept off”, just buried under 6ft of dirt…

Metrology is a fairly complex subject in its own right, and mostly this is kept away from “non-domain experts” because even with nice graphs people get glazed eyes.

Judges especially hate science; what they want is a piece of paper with numbers that appear to be “evidence”, not to be told “Well there is a one in X chance the reading was incorrect because it was below the acceptable temperature range of the equipment…” or similar.

Judges hate scientists because as “experts” they can in effect “direct the judge”. If you really want to see a seriously upset judge, just wait and see one who has just been told by a Chartered Accountant “the tax law says…” when the judge wants it to say something else…

When a scientist tells a judge the laws of physics etc say “that is not possible, the evidence provided by XXX is not factually correct”, it can get very very unpleasant…

One of the big issues with the US legislative and regulatory system is it prescribes tests, not science. Thus if a new, better, more accurate test comes along and you are guilty by the old inaccurate test but innocent by more modern more accurate tests, the legislative and judicial conceit is you are guilty “because that is the way we’ve done it in the past”.

Judges “judge definitively” based on what it says on a “piece of paper”; never ever make the mistake of thinking otherwise… You give a judge two conflicting pieces of paper and they are not going to behave rationally in many cases. Basically they fall back on the fallacy of “the preponderance of evidence”, which is a bull5h1t way of saying they’ve stacked a bigger pile of cr4p up than you have…

https://www.merriam-webster.com/legal/preponderance%20of%20the%20evidence

Not only do they do this in “civil cases”, which is bad enough, they do it in criminal cases as well… On the excuse it is “just one part of the evidence”, so is somehow contained in its effects on the outcome of a trial…

Oh, and don’t forget “expert witnesses” lie all the time; they “simplify for the jury” or similar nonsense…

For instance, if I say when talking about a paint fleck “the paint is a non-standard green, and the defendant made it up from two cans of paint”, it might sound convincing, but it’s almost certainly complete nonsense for various reasons.

The FBI pulled a similar stunt for years over matching bullet fragments by metallurgical content ratios. They never did the tests to show if their assumptions were valid or not. When somebody did, and called “bull5h1t”, did they apologise? Not that I can remember… Did anyone get sacked, demoted, lose their pension? I think you can guess the answer to that.

JonKnowsNothing June 30, 2021 11:34 AM

@ Clive, @ Weather, All

re: The rocket propulsion book I have uses that math and I’ve spent 15 years reading it…

I think some of us have similar books…

Personally, I have a horse training book that I’ve been studying for decades. I have almost grasped “The Walk” and am part way through “The Trot”….

The over view is easy and the basic practice isn’t hard to achieve. However, to be on the higher end requires more thought and serious contemplation of physics and body mechanics for both the horse and rider.

The horse cannot respond to the rider if the rider asks something that is not physically and mechanically possible at that moment. Asking at the right time, when everything aligns, makes a big difference.

People ask me “How long will it take to learn to ride?”
Answer:
  I can teach you to stay on the horse in 2-3 days.
  It will takes a lifetime to learn how to ride.

===

  • Give Your Horse a Chance: A Classic Work on the Training of Horse and Rider
    by Lt Col a L D’Endrody originally published after WW2, republished 2012.

Clive Robinson June 30, 2021 2:44 PM

@ JonKnowsNothing,

People ask me “How long will it take to learn to ride?”

I know the answer to that one; in my case it’s “never”. To be honest I’m six foot six, fairly broad, and somewhat solid, even as I approach my dotage, imminent or otherwise…

Let’s just say it’s unlikely that I could find a horse to learn on that would be large enough not to be injured if I did get on its back… Which, let’s be honest, would be further off of the ground than I would like, and there is that V^2 element to consider when you do inevitably go splat…

But to be honest I’ve looked in a horse’s eyes a number of times, and I’ve been left with the distinct feeling that most horses are one shake of the mane away from madness. So height and madness is a combination not that far from defenestration in my book…

I could be doing them an injustice, and it’s just that they have a heightened “flight response”, but that’s not really much better. Do you really want to be sitting astride a creature that might suddenly take off like you’d used its tail as the blue touch paper?

So no, I’m not climbing up, as I figure I’ll come to less harm if I don’t.

MarkH July 2, 2021 11:41 AM

@Clive, SpaceLifeForm:

As I wrote above, I find it better to keep categories separate.

Apart from general considerations of stream ciphers, I address specifically the Shrinking Generator (Coppersmith, Krawczyk, and Mansour).

Clive made several references to linear complexity, which can be defined as the size of the shortest LFSR capable of generating the bit sequence.

The Shrinking Generator paper addresses linear complexity directly, offering a formula with proof thereof. If |L| is the length of register L in bits, A is the LFSR generating the “raw” stream, and S is the LFSR generating selector bits, then the linear complexity of the Shrinking Generator output sequence is greater than

|A| • 2^(|S|-2)

For example, if a Shrinking Generator is composed of two 32-bit registers, then it would be necessary to collect several gigabytes of output in order to attempt an algebraic reconstruction based on linear complexity. Even were all that data available, the computation would be infeasible.
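For the record, a toy version of the construction, with 5- and 4-bit registers so it fits in one’s head (a real instance would use long registers with secret connection polynomials, as the paper requires):

    from itertools import islice

    def lfsr(state, taps, nbits):
        # Fibonacci LFSR: emit the LSB, feed XOR of tap bits into the top
        while True:
            yield state & 1
            fb = 0
            for t in taps:
                fb ^= (state >> t) & 1
            state = (state >> 1) | (fb << (nbits - 1))

    def shrinking(a_state, s_state):
        A = lfsr(a_state, (0, 2), 5)  # x^5 + x^2 + 1, maximal length
        S = lfsr(s_state, (0, 1), 4)  # x^4 + x + 1, maximal length
        for a, s in zip(A, S):
            if s:        # selector bit 1: keep the raw bit
                yield a  # selector bit 0: silently discard it

    keystream = list(islice(shrinking(0b10101, 0b1001), 16))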

This is not to say that other algebraic attacks are not possible! However, either the paper is wrong, or linear complexity is not the Achilles’ heel of the Shrinking Generator.

========================

The central concept — and per John Cleese, the “tricky bit” — of the Shrinking Generator is that each LFSR conceals the structure of the other.

Yes, they would easily fall in isolation. But the keystream bit sequence reveals only their combination.

Clive wrote,

what if in effect I put all zeros in the second LFSR? Or I pick tap and start values for the second LFSR such that I get two or three “long gaps” early on?

I’m confident he meant all ones … with all zeros in the selector register, no bits would ever be selected, so there would be no keystream at all, which is even more secure than OTP but not very useful.

I suggest two responses to those hypotheticals:

[1] If an adversary is controlling key selection, then no cryptosystem should be considered secure; if keys are chosen at random, certain special scenarios will happen (at very low probability). But if the key is secret, then how does an attacker know that the first part of the keystream has lots of contiguous A sequence bits? And very crucially, how does the attacker know where those bits fall in the A sequence?

[2] In a very conservative implementation, after the IVs are set in the registers, the keystream could be defined to start after some predetermined number of shift cycles.
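
To make [2] concrete, here is a minimal sketch of the construction in Python. The tap masks and seeds below are arbitrary placeholders of mine, not a vetted parameter set; per the paper's recommendation, the taps themselves would be part of the secret key:

    def lfsr_bits(state, taps, width):
        """Fibonacci LFSR over GF(2). `taps` is a feedback bit mask and
        `state` a nonzero integer; yields one output bit per clock."""
        while True:
            yield state & 1
            fb = bin(state & taps).count("1") & 1       # XOR of the tapped bits
            state = (state >> 1) | (fb << (width - 1))

    def shrinking_generator(a_seed, a_taps, s_seed, s_taps, width, warmup=256):
        """Clock A and S together; S's bit decides whether A's bit is kept.
        `warmup` discards the first cycles, per the conservative option [2]."""
        A = lfsr_bits(a_seed, a_taps, width)
        S = lfsr_bits(s_seed, s_taps, width)
        for _ in range(warmup):
            next(A), next(S)
        while True:
            a, s = next(A), next(S)
            if s:           # selector bit set: the A bit survives
                yield a     # selector bit clear: the A bit is silently dropped

    # Toy usage with made-up 32-bit parameters:
    ks = shrinking_generator(0x1234ABCD, 0x80200003, 0x0BADF00D, 0xA3000001, 32)
    first_byte = sum(next(ks) << i for i in range(8))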

========================

I don’t know how strong the Shrinking Generator is. The paper very carefully avoids claiming that it is theoretically secure.

It has stood for 27 years without (as far as I can discover) publication of any feasible attack, provided that the paper’s recommendation is followed to make the connection polynomials part of the key (secret, and variable).

Clive Robinson July 2, 2021 2:40 PM

@ MarkH,

However, either the paper is wrong, or linear complexity is not the Achilles’ heel of the Shrinking Generator.

The Shrinking Generator is now in its third decade, and there have been the NESSIE and eSTREAM competitions in the last two decades. Each spawned new attacks, and “non-linear algebraic complexity” is now believed to be insufficient on its own.

The likes of spectral analysis and sequency/wavelet analysis have moved forward and knocked chunks off algebraic complexity in various ways.

Wavelet analysis in particular looks like it will be the next area where attacks against LFSRs and equivalent systems move forward. It really needs something to drive it forward (which crypto competitions have a habit of doing).

The thing is, even though all stream generators can be viewed as a counter or shift register driving a mapping function, the question of the complexity of the mapping function comes up. Block ciphers and stream ciphers tend to use radically different mapping techniques, as do stream ciphers designed for software implementation rather than hardware. Trying to make a mapping function that is all things to all situations is a recipe for trouble more often than not…

In effect the shrinking generator, by being two more or less independent shift registers, makes the mapping function rather more complex, as it gives an increased degree of freedom. You can get similar effects using card shuffling algorithms, where you use, say, a nonlinear function rather than an incremental pointer; for argument’s sake, an LFSR with a non-linear filter to generate the incrementing-pointer replacement.

The real question for theoretical attacks against the algorithm then becomes finding distinguishers, and how you make the effects linear for easier analysis[1].

From an implementation perspective, finding distinguishers can be trivial if there is some kind of time-based channel. This was the issue with AES: the competition appeared to push a strong algorithm, but really encouraged weak implementations. The result: a quarter of a century of weak “on-line encryption” systems (hence the NSA only giving “data at rest” clearance to the likes of IMEs).

The likelihood of an “Open Community” break against the shrinking generator is not driven by security concerns outside of crypto competitions, but by what will get your name not just published but associated with a new research area as it develops (get in whilst it’s small and your name grows with it, via citations at a minimum).

Thus people playing with stream generators have in effect put the shrinking generator up on the shelf where it is gathering dust.

A student might take it down and dust it off, but “name” researchers are looking at other things, such as quantum-computer-proof algorithms.

Thus the closed community of SigInt agencies may well have broken the shrinking generator already, or know how to make it weak under certain circumstances, because their research interests move under the force of entirely different drivers.

Don Coppersmith is a bright cookie, but which side does he really play for? Look back at the history of DES and what happened.

[1] A 2002 paper from Don Coppersmith and others where they attack NESSIE finalist Snow 2.0, which used a cellular automaton as a non-linear element on top of a form of LFSR. Oh, and quite a smackdown on Don’s own earlier Scream cipher,

https://eprint.iacr.org/2002/020.pdf

MarkH July 2, 2021 10:41 PM

@Clive:

I have a different vision of how research projects are born.

Certainly an old stream cipher — probably little used because people don’t like its irregular output — is not a high priority.

However, grad students (and their teachers) perennially struggle to find projects that haven’t been done before.

“Why don’t you try this new technique on a never-yet-broken stream cipher designed by a famous cryptanalyst? If you’re the first one to find an attack better than exhaustive search, it will boost your reputation.”

I found papers from a few years ago developing IV recovery attacks on the Shrinking Generator (of no practical use, I suppose, when the polynomials are secret). I don’t think this old cryptosystem is forgotten yet.

Clive Robinson July 3, 2021 7:56 AM

@ MarkH,

However, grad students (and their teachers) perennially struggle to find projects that haven’t been done before.

Sometimes I’m not surprised at all, especially when such projects actually go nowhere…

A little historical story for you, with names etc. withheld to stop blushes, but I will say it goes back just about all of four decades.

A person towards the bottom of the ladder was told a “fact” by a person somewhat further up the ladder. The lowly person doubted it, but was assured it was correct by the much more senior teller of the fact…

Well, the low-level rung holder still felt they were right and the senior rung holder was effectively wrong…

The “fact” is one you still hear handed down even today in one form or another, and students hear,

“blah… blah… blah… since integer addition is nonlinear when considered over GF(2).”

Thus they assume, incorrectly, that “addition is nonlinear over GF(2)” is true for all additions. It’s not, and it can be shown as such, as the lowly rung hanger did…

The reality is somewhere in the middle… That is, addition is sometimes non-linear and sometimes it is not, and it depends on whether your bit adders are behaving as a “half adder”, which is linear, or a “full adder”, which is nonlinear.

Why is a half adder linear? Well, its sum output is an XOR gate, and you can look up the argument as to why that is linear (or just draw out the truth table and observe the output in terms of the number of set/clear bits and, more importantly, the correlation between an input bit and the output bit).

The full adder, however, is linear when acting as a half adder, as you would expect, and nonlinear when acting as a full adder (which has consequences we discussed the other day when talking about C’s lack of carry flag access). What makes the difference is the state of the “carry in” bit, which comes from the previous bit adder’s carry out.

So when you add two integers with no carry in, the least significant bit (LSB) is always a half adder, which has the same input-to-output mapping as XOR. This is why the likes of the Mitchell-Moore generator from 1958 gives a maximal-length sequence in the bottom bit of the result: that bit is actually a standard XOR LFSR.
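
A quick empirical check that the bottom bit really does behave as a pure XOR recurrence (Python; the lags 24 and 55 are the classic Mitchell-Moore ones, the seed is arbitrary):

    import random

    # 1. The LSB of a sum is the XOR of the input LSBs (half adder, no carry-in).
    for _ in range(1000):
        a, b = random.getrandbits(32), random.getrandbits(32)
        assert (a + b) & 1 == (a ^ b) & 1

    # 2. Hence the bottom bit of the additive generator x[n] = x[n-24] + x[n-55]
    #    obeys the purely linear recurrence bit[n] = bit[n-24] XOR bit[n-55].
    x = [random.getrandbits(32) for _ in range(55)]
    for n in range(55, 10000):
        x.append((x[n - 24] + x[n - 55]) & 0xFFFFFFFF)
        assert x[n] & 1 == (x[n - 24] ^ x[n - 55]) & 1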

Now if you think for a moment, you realise that when you add any integer to zero you get the same number, and obviously no carries in any of the bit-by-bit additions. Thus that set of additions is obviously completely linear…

But now consider that carry propagates from LSB to MSB. We know that the LSB result is always linear; we also know that carry is generated ONLY when both input bits are set, i.e. the AND function, which happens for only 1 of the four input states. So more likely than not a carry will not happen, and there is no carry across to the next pair of bits… It’s fairly simple to show that if the full adder’s input Hamming weight is one then you get no carry.

I won’t go through the odds for the other bits, but it can be seen that the “non-linearity” will fail to propagate more often than it will, especially in the lowest bits.

Thus to find out where a run of carries starts, AND the two integers together: everything up to the first set bit will be linear, and those bit-by-bit additions will be independent of the preceding bits. That first set bit tells you where the nonlinearity starts in that addition, and where the dependency on adjacent bits starts.

But a run of carries can and will stop when the next bits up are both clear… something you can easily find with the logical OR of the two integers.

Thus you can come up with a simple “walk up” algorithm to count the actual number of carries that will occur. This count is a “Hamming weight”: the number of carries gives you the non-linear complexity of any given addition, and from that the amount of simple linear complexity, and thus how easily an Additive-Feedback Shift Register (AFSR) can be attacked, can be calculated.
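
In fact you can skip the walk-up loop entirely: because the sum is the XOR of the two inputs and the carry-in vector, the carries fall out of a single identity. A small Python sketch (`carry_weight` is my own name for it, not a standard term):

    def carry_weight(a, b):
        """Hamming weight of the carry-in vector of a + b.
        Since sum = a ^ b ^ carries, we have carries = a ^ b ^ (a + b);
        each set bit marks a position whose result depends on lower bits."""
        return bin(a ^ b ^ (a + b)).count("1")

    assert carry_weight(0b1010, 0b0101) == 0   # disjoint set bits: fully linear
    assert carry_weight(1, 1) == 1             # one carry, into bit position 1
    # a & b marks where carries are generated; bits clear in a | b are where
    # a run of carries must stop, exactly as described above.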

Something the lowly rung hanger kind of wrote up informally, but had realised that the more senior rung hanger was not going to be interested in seeing…

Well nearly a decade later this paper got published,

Staffelbach, O. and Meier, W. (1990), “Cryptographic Significance of the Carry for Ciphers Based on Integer Addition,” Advances in Cryptology — CRYPTO ’90.

https://link.springer.com/content/pdf/10.1007%2F3-540-38424-3_42.pdf

Someone who had worked their way up the ladder somewhat by that point sent the other rung hanger, still with a death grip on the same rung as before, an email enquiring if they had seen the paper, and saying what a lost opportunity it was… The result was not unexpected, so the world moved on around the death-grip hanger-on, who I assume has vacated their rung one way or another by now.

So in the rest of the world we now have the so-called “Feedback with Carry Shift Registers” (FCSRs), which first fixed the LSB but later had to work with rather more bits.

In essence you do two adds, and add in something that changes dynamically, which can be a delayed version of the previous additions, etc., or something more secure.
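
A minimal sketch of that idea in Python (a Fibonacci-style FCSR in the manner of Klapper and Goresky; the toy state and taps are my own, chosen only for illustration):

    def fcsr_bits(state, taps, carry=0):
        """Feedback with Carry Shift Register: like an LFSR, but the tapped
        bits are summed as integers; the low bit of the sum is fed back and
        the excess is retained as the carry, i.e. the dynamically changing
        extra input. `state` is a bit list (newest bit first) and `taps`
        a 0/1 list of the same length."""
        while True:
            yield state[-1]                          # shift out the oldest bit
            sigma = carry + sum(t & s for t, s in zip(taps, state))
            state = [sigma & 1] + state[:-1]         # feedback bit enters
            carry = sigma >> 1                       # kept for the next clock

    # Toy usage:
    gen = fcsr_bits([1, 0, 1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0, 1, 1])
    stream = [next(gen) for _ in range(16)]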

However, it is all very nice in theory, but in the real world adders are oh so slow, especially when more than a couple of bytes wide, and they carry much higher gate counts than other non-linear circuits. So having many, many adders, in either Galois LFSR or Fibonacci LFSR form, as the “many taps to be secure” requirement calls for, can break the “gate count budget” rather rapidly.

But even in software the algorithms are not that fast, and using a block cipher in counter or some similar mode starts to look like a better option, as there are off-the-shelf macros, built-in hardware, etc.

The sad thing is, I’ve had similar academic experiences in several apparently unrelated knowledge domains. In fact, when looking around to do a real PhD, I was horrified at the issues I crashed into. The main one was the “un-written rule” that “You are not supposed to do your own line of research, but your supervisor’s line of research”, on the excuse of “that’s why you are getting the stipend” etc. Oh, and if you are lucky you might get your name mentioned on any paper they write out of your work…

As I’ve mentioned in the past, I interviewed a lot of potential PhD supervisors, and they really were not people I would consider working with. One, however, at the LSE, called me out of the blue… I’d been at an Xmas party chatting to a rather nice Russian girl who was doing her PhD at the LSE. We got talking and I mentioned the problems I was having. She told me her supervisor called it “Academic Ma5turb4t1on”, we had a laugh about it, changed the subject, had a few more drinks and swapped numbers.

Thus the totally unexpected phone call from her supervisor; we chatted, had a couple of meetings, and it looked good. Unfortunately I would have had to be self-funding, but that would not have been a problem, as my then employer was prepared to have me work half-time for the duration. Unfortunately other events happened and the world changed for the worse…

But hey, I was never destined for the cut and thrust and back-stabbing of academia; too much politics, boot-licking and time-serving, and not enough of the “fun” of finding out new and interesting things.
