[cryptography] Duplicate primes in lots of RSA moduli

Marsh Ray marsh at extendedsubset.com
Wed Feb 22 16:55:39 EST 2012

On 02/22/2012 09:32 AM, Thierry Moreau wrote:
> While commenting about
> http://www.cs.bris.ac.uk/Research/CryptographySecurity/knowledge.html
>  , Marsh Ray wrote:
>> It talks about entropy exclusively in terms of 'unpredictability',
>> which I think misses the essential point necessary for thinking
>> about actual systems: Entropy is a measure of uncertainty
>> experienced by a specific attacker.

(Actually this was a comment on ...)

> I am curious that you seem to prefer the risk analysis definition of
>  entropy over the more general definition. I am rather confident that
> a proper application of the more general definition is more effective
> in providing security assurance: the future attack vectors are deemed
> to be unexpected ones.

Simply pragmatic utilitarianism rather than risk analysis per se.

I'm putting myself in the position of an engineer who's designing the
logic and writing some low-level firmware for the next consumer grade
$50 blue box home router/wifi/firewall appliance:

======= [cue dream sequence wavy blur effect]

I'm an EE with many years of experience, going back almost, but not quite,
as far as the days of fully discrete logic designs. I've been part of the
current design team for 5 years now. I have a gray beard and drink 4
mugs of strong coffee a day. I like to read science fiction and handmake
acoustic guitars in my free time.

So far in my career, I haven't been involved in implementing any crypto
hardware. But show me the spec, and I can implement it in programmable
logic with minimal power consumption. No one can match my productivity
when I have my favorite toolchain. In fact, my last ASIC design passed
validation on the first run, saving the company at least $100,000.

My manager used to be a hands-on EE too, but that was way back before
LSI programmable logic. These days his favorite expressions are
"Sooner, cheaper, rechargeable: choose any two" and
"Don't make a Rembrandt".

This project is still in its early stages, and we haven't settled on a
supplier for the processor yet. Depending on the business people's sales
volume projections, we'll probably end up with either a small embedded
SoC and a small glue ASIC, or licensing an IP soft core for a full ASIC.

We have settled on the embedded OS though, and we even have a simple web
server up and running on a development board left over from a previous
project. This will probably be the basis of the end-user management
interface, now being designed by our product people and software developers.

[fast forward to later in the project]

So the UI is prototyped and now up and running. The firmware guy has
been helping the web UI people add support for HTTPS, using an SSL
library we licensed. It was chosen primarily due to its support for our
embedded OS.

The software people are telling me the library requirements state it
needs a good source of random numbers, and it would be best if I supply
them via hardware. They even made it a design requirement; there's a
first time for everything, I guess.

My first thought is an LFSR-based pseudorandom bit stream generator
which I could easily fit into the available area on our ASIC. We used to
use these in some RF work, and also in toys for noise generators.
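A plain LFSR really is noise-like but totally deterministic. A quick software model (16-bit Fibonacci form; the taps and seed here are just illustrative) makes the point:

```python
def lfsr16(state, taps=(15, 13, 12, 10)):
    """One step of a 16-bit Fibonacci LFSR. The taps correspond to a
    primitive polynomial (illustrative choice); returns
    (new_state, output_bit)."""
    bit = 0
    for t in taps:
        bit ^= (state >> t) & 1
    state = ((state << 1) | bit) & 0xFFFF
    return state, bit

# The stream looks noise-like, but a given nonzero seed always
# replays the exact same sequence.
state = 0xACE1
bits = []
for _ in range(16):
    state, b = lfsr16(state)
    bits.append(b)
print(bits)
```

Which is exactly why it's fine for RF test patterns and toy noise generators, and useless as a source of secrets.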

But looking at the requirements closer, they say that for cryptographic
purposes this needs to be something more than your ordinary pseudorandom
generator. It needs to produce "entropy". I haven't really heard that
word used much since school; I vaguely remember it being somewhat
mysterious. The term was introduced in both my advanced physics and my
coding theory courses ...with completely different meanings!

Googling around for a definition of "entropy", I find the Wikipedia
article. But that's not so helpful. Certainly nothing there I can implement.

So I search for "random entropy generator" and I find
"Turbid: A High-Entropy Randomness Generator".

This article is great and I read the whole thing. It really appeals to
my inner engineer. Of course, our product will not have an A/D
converter, much less a microphone, so it's just a curiosity.

I notice the TI CC2530 uses an LFSR which "can be seeded with
random data from noise in the radio ADC".
But, again, we're not using that chip or an RF ADC.

Searching more on the web for entropy sources,
I learn about quantum radioactive decay sources, cosmic RF noise
detectors, avalanche diodes, resistor noise, webcams on lava lamps and
all sorts of other fascinating ways to generate entropy. But, again,
none of these fit within our BoM (bill of materials).

I can really only find one entropy source design that I might be able to
do in the available hardware. This one involves using some free-running
oscillators to drive some counters or clock bits into an LFSR.
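For the record, the scheme amounts to latching a fast free-running counter on a slower, independent clock, hoping that accumulated jitter makes the captured low-order bit unpredictable. A toy software model (all the numbers, and the Gaussian jitter model itself, are made up for illustration; real designs depend entirely on the actual analog behavior):

```python
import random

def sample_jittery_counter(n_samples, nominal_ratio=173.3, jitter=0.7):
    """Toy model of a free-running-oscillator TRNG: a fast counter is
    latched on each edge of a slow clock. The slow/fast frequency
    ratio drifts (phase jitter), so the low-order bit of each
    captured count is (hopefully) unpredictable. All parameters
    here are invented for illustration."""
    fast_count = 0.0
    bits = []
    for _ in range(n_samples):
        # Advance the fast counter by one slow-clock period,
        # perturbed by simulated phase noise.
        fast_count += random.gauss(nominal_ratio, jitter)
        bits.append(int(fast_count) & 1)   # latch the LSB
    return bits

print(sample_jittery_counter(32))
```

Note how the quality of the output rides entirely on `jitter`, the one parameter no CAD tool can tell you in advance.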

So I spend the afternoon putting an experimental implementation into my
CAD tool. It is a fun and interesting project, but the prospects are
disappointing. For a few reasons, it's a no-go:

A. The software validation tool correctly identifies the free-running
oscillator as asynchronous logic and issues a big red warning about it.
Our process guidelines have a strong warning against using asynchronous
logic.

B. Our company also has a no-compiler-warnings rule. I could probably
argue for an exception, but this rule has served us well in the past.

C. The distribution of oscillator frequencies that will be seen in
actual parts can't be accurately predicted in advance. It's certain to
change if we change Si processes, or even batches of parts. I will be
required to define strict acceptance parameters for it, but I don't know
how that might affect yields.

D. Because of (C), my software can't estimate the power consumption of
the circuit unless I break this out into a whole separate project,
leaving a line on the Excel spreadsheet with just a WAG (wild ass guess)
for the power budget.

All of the above introduce risk and uncertainty. If it doesn't work
right and causes the need for another revision of the ASIC, it will add
a minimum of $100,000 and weeks to the schedule. Privately, I'm starting
to suspect that none of these quantum diode junction noise advocates
have ever actually designed a circuit for mass production.

Unsure what to do, I convey my findings to the team and the
product manager at the next design meeting, including my best effort at
explaining what "entropy" is and why it might be needed (leaving out the
part about the lava lamps).

Someone on the team mentions "Why yes, I recall reading something about
Intel announcing a crypto entropy source like that to be a headline
feature of their next generation of processors".

The product manager asks "Well, what do other products in this price
range do?" and the decision easily turns on this question.

None of the other products in this space advertise a hardware entropy
source. The higher-end products that do offer one even charge hundreds
of dollars for it as an optional feature. Our Marketing might like to
have another feature to talk about, but on the other hand they've never
heard of it, don't know what the demand is for it, and might not want to
draw attention to the fact that none of our other products have it either.

So our product ends up without a hardware entropy source.

Our web UI developers should just be able to deal with it like every
other product of this type. They settle on feeding the current time
obtained from the real-time clock into the PRNG whenever it is accessed,
just like the sample code provided by the crypto library.
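To be clear about what that buys you: two units that power on at the same second (or, with no battery-backed RTC, at the same factory-default time) seed identically and produce identical "random" output. A toy sketch, with Python's PRNG standing in for the licensed library (not any real library's API):

```python
import random

def make_key_material(rtc_seconds):
    """Stand-in for key generation seeded only by the real-time
    clock (toy model, not any real crypto library's interface)."""
    rng = random.Random(rtc_seconds)   # seed = current time
    return rng.getrandbits(128)        # "secret" key material

boot_time = 1329580800   # two units power up in the same second
key_a = make_key_material(boot_time)
key_b = make_key_material(boot_time)
print(key_a == key_b)    # True: identical "random" keys
```

Scale that to a production run of devices with synchronized or unset clocks, and colliding keys stop being a corner case.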

Our QA team even goes beyond the call of duty and tests the randomness
of the output to NIST SP 800-22 "A Statistical Test Suite for Random and
Pseudorandom Number Generators". Everything checks out fine.
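That outcome is exactly what you'd expect: statistical suites measure the shape of the output, not the size of the seed space. A stream expanded from a mere 16-bit seed, one of only 65536 possible streams, sails through a simple monobit-style check (a sketch of the idea, not the full SP 800-22 suite):

```python
import hashlib

def low_entropy_stream(seed16, n_bytes):
    """Expand a 16-bit seed into a long byte stream by SHA-256
    hash chaining: statistically excellent, cryptographically
    worthless (only 65536 possible streams exist)."""
    out = b""
    state = seed16.to_bytes(2, "big")
    while len(out) < n_bytes:
        state = hashlib.sha256(state).digest()
        out += state
    return out[:n_bytes]

def monobit_fraction(data):
    """Fraction of 1 bits: the simplest SP 800-22-style check."""
    ones = sum(bin(b).count("1") for b in data)
    return ones / (8 * len(data))

stream = low_entropy_stream(seed16=12345, n_bytes=100_000)
print(round(monobit_fraction(stream), 3))   # ~0.5, i.e. "passes"
```

QA's tests are doing their job; they are just the wrong tool for detecting a small seed space.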

By default the web UI is active whenever the device is powered on. If
there is no SSL certificate stored in the configuration, a self-signed
cert is generated automatically.

============= [Writing as Marsh again.]

The rest is history ... where 'history' equals some CVE-2012-XXXX issued
on 50,000+ factorable RSA keys.
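The "factorable" part is just Euclid's algorithm: if two moduli were built from one shared prime, a single gcd recovers it instantly. A demonstration with toy-sized stand-in primes:

```python
from math import gcd

# Toy primes standing in for 512-bit RSA primes; in the real
# incident, many device-generated moduli shared a factor.
p  = 1000003     # prime accidentally generated twice
q1 = 1000033     # second prime of device A's modulus
q2 = 1000037     # second prime of device B's modulus

n1 = p * q1      # device A's public modulus
n2 = p * q2      # device B's public modulus

shared = gcd(n1, n2)        # cheap even for 1024-bit numbers
print(shared == p)          # True: the shared prime falls out
print(n1 // shared == q1)   # True: modulus fully factored
```

The batch version (pairwise gcds over tens of thousands of harvested public keys, sped up with product trees) is what turned a theoretical weakness into a mass key-recovery result.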

Obviously this story is made up and probably not even fully consistent.
But having worked a little bit around hardware engineers, it seems to me
like a very plausible scenario, if not a typical one.

> You are not alone using this perspective. NIST documents on secret
> random data generation are very confusing about the definition they
> use.

I'm thinking of it like this... if *I* had been in that product design
meeting, what could I have said to convey the real issue in concrete
terms that would have focused the attention where it needed to be in
order to avoid the mass vulnerability?

Here's what I can come up with:

"We're going to ship a ton of these devices, and well, they're firewalls
so most of them are going to live out on the internet for years without
other protection. Although we think of them as consumer devices, mostly
we don't know how they will be deployed. Many of them are going to come
under attack by organized attackers from the future who may even be able
to invest more time into analyzing the weak spots of our design than we can.

"Entropy in this case represents how much such an attacker doesn't know
about internal state of our device. For example, if we end up with only
32 bits of entropy in the secret key, the attacker will be able to guess
our secret key in only 2^31 guesses on average. He can even do these
guesses on his own computer and benefit from Moore's law some number of
years into the future. [of course this even under-states it, given the
GCD results]
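The arithmetic behind that claim is worth making concrete (the guess rate below is an illustrative assumption, not a measured figure):

```python
# Back-of-envelope cost of brute-forcing a key with only
# 32 bits of entropy behind it.
keyspace = 2 ** 32            # every state the key could be in
avg_guesses = keyspace // 2   # expected guesses: 2^31

rate = 10_000_000             # guesses/sec; assumed, offline attack
seconds = avg_guesses / rate
print(f"{avg_guesses:,} guesses ~ {seconds / 60:.0f} minutes")
```

Minutes on one machine today, and Moore's law only shrinks that number.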

"On the other hand, 32 bits of entropy is probably not something our QA
team would be able to distinguish from a properly functioning product.
The product will still appear to 'work great', but it will be insecure.
If we ship such a product, a security vulnerability will result. If
customers get hacked, or if they have to regenerate a large number of
widely-trusted keys, it will be a big big deal.

"So even if we can't include a hardware entropy source, we have to be
vigilant about the quality of the RNG used by our system as it
represents one of those silent 'gotcha's that QA won't be able to catch.

> (I dropped out of their feedback requests on the last revision/round
>  where they split the contents into two documents and released only
> one.) NIST seems to refer to three definitions: one from
> information theory (min-entropy), one where every bit is
> unpredictable (full entropy -- you know how NIST loves cryptographic
> parameters of just the proper size), and the risk analysis
> definition.

Hardware engineers are smart people; they can understand the math either
way you do it. What was missing in this scenario was a suitably
down-to-Earth definition of "entropy" that could be used to make the
case for extra care and costs when it came down to design trade-offs.

The limitation I see with the "unpredictability" definition is that it
*appears* to imply an unrealistic scenario: as if the real concern were
that of an attacker literally being able to guess the output of the RNG
at some point in the future. Some attacks do work that way (e.g. CBC
mode IV prediction), but in plenty of other important cases, such as key
generation, it amounts to putting an unrealistic constraint on the attacker.

Of course the "unpredictability" in this definition of entropy refers to
a theoretical model that is the basis of proofs and so we can't argue
with it in the company of mathematicians. But it translates very easily
into an intuitive, practical model - which is actually a wrong one.

> Anyway, this whole thing about RSA modulus GCD findings questions us
>  about entropy in a renewed perspective (a reminder that future
> attack vectors are deemed to be unexpected ones).

I think this is a very interesting question.

Does the GCD finding constitute a "future attack", or an "expected
attack", in such a way that it favors one working definition over the
other?

- Marsh
