[cryptography] urandom vs random

Sandy Harris sandyinchina at gmail.com
Sat Aug 17 12:39:30 EDT 2013


shawn wilson <ag4ve.us at gmail.com> wrote:

> I thought that decent crypto programs (openssh, openssl, tls suites)
> should read from random so they stay secure and don't start generating
> /insecure/ data when entropy runs low.

(Talking about Linux, the only system where I know the details)

urandom uses cryptographically strong mixing (SHA-1) and has
enormous state, so it should be secure barring pathological
cases: the router vendors whose version of Linux failed to
initialise the pool properly, or an enemy who already has root
on your system and can therefore read kernel internals (and
that enemy has much better targets to go after).
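To make the distinction concrete, here is a minimal sketch (assuming a
Linux system, where /dev/urandom is always present) of how a program
typically pulls key material from the non-blocking device:

```python
# Sketch: reading from /dev/urandom on Linux. It never blocks;
# the kernel's CSPRNG stretches its large pool state
# cryptographically, so reads succeed even when the entropy
# estimate is low.
with open("/dev/urandom", "rb") as f:
    key = f.read(32)  # 256 bits of pseudorandom data

assert len(key) == 32
```

(In practice most programs call a wrapper such as Python's os.urandom()
or OpenSSL's RAND_bytes(), which read from the same kernel source.)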

Papers like Yarrow with respected authors argue convincingly
that systems with far smaller state can be secure.

> The only way I could see this
> as being a smart thing to do is if these programs also looked at how
> much entropy the kernel had and stopped when it got ~50 or so. Is this
> the way things are done when these programs use urandom or what?

That would make no sense, since the interface already provides
another way to get that effect. If you really need guaranteed
entropy, for example to generate a long-term key, use
/dev/random instead. The driver then checks the entropy estimate
and blocks (makes your program wait) if there is not enough.
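The kernel also exposes its entropy estimate directly, so a program can
inspect it rather than guessing. A sketch (assuming Linux; the procfs
path and the non-blocking open are the standard interfaces, not anything
specific to the programs named above):

```python
import os

# The kernel publishes its current entropy estimate, in bits:
with open("/proc/sys/kernel/random/entropy_avail") as f:
    entropy_bits = int(f.read())
print("estimated entropy:", entropy_bits, "bits")

# /dev/random consults that estimate and blocks when it is low.
# Opening it non-blocking shows the difference without hanging:
fd = os.open("/dev/random", os.O_RDONLY | os.O_NONBLOCK)
try:
    data = os.read(fd, 16)   # raises BlockingIOError if the pool is low
except BlockingIOError:
    data = b""               # a blocking read would have waited here
finally:
    os.close(fd)
```

A read from /dev/urandom in the same situation would simply return,
which is exactly the behaviour the driver intends.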
