[cryptography] Duplicate primes in lots of RSA moduli

Ben Laurie ben at links.org
Sun Feb 19 12:57:37 EST 2012


On Sun, Feb 19, 2012 at 5:39 PM, Thierry Moreau
<thierry.moreau at connotech.com> wrote:
> Ben Laurie wrote:
>>
>> On Fri, Feb 17, 2012 at 8:39 PM, Thierry Moreau
>> <thierry.moreau at connotech.com> wrote:
>>>
>>> Ben Laurie wrote:
>>>>
>>>> On Fri, Feb 17, 2012 at 7:32 PM, Thierry Moreau
>>>> <thierry.moreau at connotech.com> wrote:
>>>>>
>>>>> Isn't /dev/urandom BY DEFINITION of limited true entropy?
>>>>
>>>>
>>>> $ ls -l /dev/urandom
>>>> lrwxr-xr-x  1 root  wheel  6 Nov 20 18:49 /dev/urandom -> random
>>>>
>>> The above is the specific instance in your environment. Mine is
>>> different: different kernel major/minor device numbers for /dev/urandom
>>> and /dev/random.
>>
>>
>> So? Your claim was "Isn't /dev/urandom BY DEFINITION of limited true
>> entropy?" My response is: "no".
>>
>>> I got the definition from
>>>
>>> man 4 random
>>>
>>> If your /dev/urandom never blocks the requesting task, regardless of
>>> how many random bytes are consumed, then maybe your /dev/random is not
>>> as secure as it might be (unless you have a high-speed entropy source,
>>> but what is "high speed" in this context?)
>>
>>
>> Oh, please. Once you have 256 bits of good entropy, that's all you need.
>>
>
> First, about the definition, from "man 4 random":
>
> <quote>
> A read from the /dev/urandom device will not block waiting for more
> entropy. As a result, if there is not sufficient entropy in the entropy
> pool, the returned values are theoretically vulnerable to a cryptographic
> attack on the algorithms used by the driver. Knowledge of how to do this
> is not available in the current non-classified literature, but it is
> theoretically possible that such an attack may exist. If this is a
> concern in your application, use /dev/random instead.
> </quote>

That's what your man 4 random says; it's not what mine says.

> If the RSA modulus GCD findings are not a cryptographic attack, I don't
> know what is. (OK, it's not published as an attack on the *algorithm*, but
> please note that a cryptographic weakness in /dev/urandom may be at stake
> according to other comments in the current discussion.)

I am not suggesting that the problems found are not caused by some
implementation of /dev/urandom. My point is simply that urandom is not
_defined_ to be weak.
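
To spell out why shared factors are so damaging: if two RSA moduli
n1 = p*q1 and n2 = p*q2 happen to share the prime p, then gcd(n1, n2) = p,
and a single gcd computation factors both keys. A minimal sketch in Python
(my illustration, with toy numbers; real keys use primes of 1024 bits or
more):

from math import gcd

# Toy primes for illustration only.
p, q1, q2 = 61, 53, 59
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)  # recovers the common prime p
assert shared == p
print("shared prime:", shared, "cofactors:", n1 // shared, n2 // shared)

This is the computation behind the GCD findings, run at scale (with a
product-tree algorithm) over millions of collected public keys instead of
two.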

> Second, about the sufficiency of "256 bits of good entropy", the problem
> lies with "good entropy": it is not testable by software, because entropy
> quality depends on the process by which truly random data is collected,
> and the software cannot assess its own environment (at least for the
> Linux kernel, which is meant to be adapted/customized/built for highly
> diversified environments).
>
> Third, since "good entropy" ultimately comes down to someone's confidence
> in the process by which the true random data is collected, you may well
> have your own confidence.
>
> In conclusion, I am personally concerned that some operational mishaps
> caused RSA keys to be generated with a poorly seeded /dev/urandom in
> environments where I depend on RSA security.
>
> And yes, my concern is rooted in the /dev/urandom definition as quoted
> above.
>
> If I am wrong in this logical inference (i.e. the RSA modulus GCD
> findings could be traced to a "root cause" other than the limited entropy
> of /dev/urandom), then I admittedly have to revise my understanding.

As I have pointed out, some systems choose to make urandom the same as
random. Would they suffer from the same problem, given the same
environment? I think it would be useful to know.
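
For anyone who wants to try it, a rough sketch of that experiment in
Python: generate a batch of keys on the system under test and scan the
moduli pairwise for common factors. This assumes the third-party
"cryptography" package; the naive O(n^2) scan below is fine for a local
test, though the published surveys used a product tree to handle millions
of keys.

from itertools import combinations
from math import gcd

from cryptography.hazmat.primitives.asymmetric import rsa

# Generate keys on the system under test; the RNG behaviour being probed
# is whatever this system feeds its key generator.
moduli = [
    rsa.generate_private_key(public_exponent=65537, key_size=2048)
       .public_key().public_numbers().n
    for _ in range(100)
]

for (i, n1), (j, n2) in combinations(enumerate(moduli), 2):
    if gcd(n1, n2) != 1:
        print("keys", i, "and", j, "share a prime factor: both are broken")

On a healthy system this should print nothing; any hit at all indicates a
seriously broken RNG.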

In any case, I think the design of urandom in Linux is flawed and
should be fixed.
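
To make the "256 bits is all you need" point concrete, here is a toy
generator (SHA-256 in counter mode; my sketch, for illustration only, not
for production use). Once it has been seeded with 256 bits of genuine
entropy, the quality of its output is bounded by the strength of the hash,
not by any further entropy input, which is why a correctly seeded urandom
has no reason ever to block:

import hashlib
import os

class CounterDRBG:
    def __init__(self, seed: bytes):
        # Assumes the seed carries ~256 bits of genuine entropy.
        assert len(seed) >= 32
        self.seed = seed
        self.counter = 0

    def read(self, nbytes: int) -> bytes:
        # Hash seed || counter repeatedly to produce arbitrary output.
        out = b""
        while len(out) < nbytes:
            block = hashlib.sha256(
                self.seed + self.counter.to_bytes(8, "big")
            ).digest()
            out += block
            self.counter += 1
        return out[:nbytes]

drbg = CounterDRBG(os.urandom(32))  # seeded once, never blocks again
print(drbg.read(64).hex())

If the problems found trace back to keys generated before such a seed was
available (e.g. at first boot), that is an implementation and deployment
failure, not a flaw in the seed-once design itself.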


