[cryptography] Duplicate primes in lots of RSA moduli

Adam Back adam at cypherspace.org
Sat Feb 18 07:24:59 EST 2012


I missed a favorite of mine, one I've personally found multiple times
in deployed systems ranging from small (10k users) to large
(million-plus users), without naming the guilty:

4. The RNG used was rand(3), and while there were multiple entropy
additions, they were fed in via multiple invocations of srand(3).

That's at least a double fail, and commonly a triple or quadruple fail
(see the sketch after this list):

a. rand(3)'s internal state is tiny (effectively 32 bits, the size of
the seed);

b. they even misunderstood the srand(3) API: calling srand multiple
times fully resets the state each time, i.e. sometimes replacing it
with less entropy on subsequent calls;

c. commonly the entropy was weak and not measured anyway;

d. the entropy was added at arbitrary points during the security
critical phase, so that even if there were 128 bits in total, it was
consumed in externally observable or testable sub-32-bit lumps.
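
To make (a) and (b) concrete, here is a minimal sketch of the
anti-pattern (hypothetical code, not from any of the systems in
question):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        srand((unsigned)time(NULL));  /* "entropy addition" no. 1 */
        (void)rand();                 /* ...some intervening use... */

        /* "addition" no. 2: srand() REPLACES the whole PRNG state,
           discarding the time-based seed; pids are small, so the
           effective seed space is now well under 32 bits */
        srand((unsigned)getpid());

        /* everything below is a deterministic function of the pid
           alone, so an attacker can simply enumerate the outputs */
        printf("\"key material\": %08x %08x\n",
               (unsigned)rand(), (unsigned)rand());
        return 0;
    }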

I guess the developers were uber-confident in their competence while
hacking that together :)  Dunning-Kruger effect at work!

Sometimes they even have competent mathematical cryptographers, but
ones who can't program and don't look at code, and the developers
never asked them how to generate randomness.

I'm wondering if that is plausible in this case... the effect would be
rather like what was observed.

Adam

On 18 February 2012 10:40, Adam Back <adam at cypherspace.org> wrote:
> I was also pondering how the implementers could have arrived at this
> situation, in the course of evaluating Stephen Farrell's draft idea
> of a service that double-checks at key-generation time that none of
> the p, q values are reused.  http://www.ietf.org/id/draft-farrell-kc-00.txt
>
> (Which I don't think is nearly robust enough to rely on, but it
> doesn't hurt as a sanity check that will catch a small percentage of
> entropy offenders.)
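>
> As a rough sketch of the kind of check the draft describes
> (hypothetical code; 64-bit fingerprints, e.g. truncated hashes,
> stand in for the real bignum primes):
>
>     #include <stdio.h>
>     #include <stdlib.h>
>
>     /* order two 64-bit fingerprints, for qsort()/bsearch() */
>     static int cmp_fp(const void *a, const void *b)
>     {
>         unsigned long long x = *(const unsigned long long *)a;
>         unsigned long long y = *(const unsigned long long *)b;
>         return (x > y) - (x < y);
>     }
>
>     /* return 1 if fp was seen before, else record it and return 0;
>        table must have spare room; kept sorted for bsearch() */
>     static int prime_reused(unsigned long long *table, size_t *n,
>                             unsigned long long fp)
>     {
>         if (bsearch(&fp, table, *n, sizeof fp, cmp_fp))
>             return 1;
>         table[(*n)++] = fp;
>         qsort(table, *n, sizeof fp, cmp_fp);
>         return 0;
>     }
>
>     int main(void)
>     {
>         unsigned long long table[16];
>         size_t n = 0;
>         printf("%d\n", prime_reused(table, &n, 0xdeadbeefULL)); /* 0 */
>         printf("%d\n", prime_reused(table, &n, 0x12345678ULL)); /* 0 */
>         printf("%d\n", prime_reused(table, &n, 0xdeadbeefULL)); /* 1 */
>         return 0;
>     }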
>
> So then what *could* have gone wrong?
>
> 1. (most generous) they had an RNG that performed entropy estimation,
> waited until it had the target amount (e.g. 128 bits minimum), and
> only then generated p and q, BUT the entropy was overestimated and
> was actually MUCH worse.
>
> 2. they didn't bother estimating entropy; the system clock was not
> set, or the device didn't have one, or they didn't use it, or they
> used memory state after boot from ROM, or something else predictable
> enough to produce the observed collision rate (aka incompetence).
>
> 3. your idea -- maybe -- more incompetent things have happened :)
>
> Occam's razor suggests cryptographic incompetence: the number-one
> reason deployed systems have crypto failures.  Who needs to hire
> crypto people, the developer can hack it together, how hard can it
> be, etc.  There's a psychological theory of why this kind of thing
> happens in general: the Dunning-Kruger effect [1].  But maybe 1
> happened.
>
> Adam
>
> [1] http://en.wikipedia.org/wiki/Dunning–Kruger_effect
>
> On 18 February 2012 07:57, Peter Gutmann <pgut001 at cs.auckland.ac.nz> wrote:
>> Adam Back <adam at cypherspace.org> writes:
>>
>>>Further, given that the entropy seeding is so bad that some
>>>implementations are generating literally the same p value (but
>>>seemingly different q values), I would think the fact that this can
>>>be detected and efficiently exploited via batch GCD points to an
>>>even bigger problem.
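>>>
>>>As a toy illustration (hypothetical code; real moduli are bignums
>>>needing GMP or similar, and batch GCD arranges these pairwise
>>>computations in a product/remainder tree so millions of keys can be
>>>checked in quasi-linear time):
>>>
>>>    #include <stdio.h>
>>>
>>>    /* Euclid's algorithm on toy 64-bit "moduli" */
>>>    static unsigned long long gcd(unsigned long long a,
>>>                                  unsigned long long b)
>>>    {
>>>        while (b) { unsigned long long t = a % b; a = b; b = t; }
>>>        return a;
>>>    }
>>>
>>>    int main(void)
>>>    {
>>>        unsigned long long p = 101, q1 = 103, q2 = 107;
>>>        unsigned long long n1 = p * q1, n2 = p * q2; /* shared p */
>>>        printf("gcd(n1, n2) = %llu\n", gcd(n1, n2)); /* prints 101 */
>>>        return 0;
>>>    }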
>>
>> Do we know that this is accidental rather than deliberate?  A cute
>> "optimisation" for keygen would be to only randomly generate one half of the
>> {p,q} pair.  It's plenty of randomness after all, surely you don't really need
>> both to be generated randomly, only one will do, and it'll halve the keygen
>> time...
>>
>> Peter.


