[cryptography] urandom vs random

coderman coderman at gmail.com
Mon Sep 9 01:35:02 EDT 2013


On Sun, Sep 8, 2013 at 9:57 PM, David Johnston <dj at deadhat.com> wrote:
> ...
> I've argued in private (and now here) that a large entropy pool is a natural
> response to entropy famine and uneven supply, just like a large grain depot
> guards against food shortages and uneven supply.

this is a good analogy :)


> ... The natural size for the state
> shrinks to the block size of the crypto function being used for entropy
> extraction

for best effective performance, it seems the memory bus(es) constrain
the optimal transmission unit size:
4KiB transfers via extended instructions provide more throughput than
repeated instructions on 512-bit chunks.

in the worst case scenarios, you're passing entropy directly into AES
native instructions, and/or onward across PCIe lanes...


> This is one of the things that drove the design decisions in the RdRand
> DRNG. With 2.5Gbps of 95% entropic data, there is no value in stirring the
> data into a huge pool (E.G. like Linux)

you keep coming back to this assumption that RDRAND is entirely
trusted and always available.

consider adding additional entropy sources like USB keys, scavengers
like Dakarand or Haveged, and so forth.
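mixing additional sources need not be complicated. a minimal sketch in python, assuming SHA-256 behaves as a good extractor -- `mix_sources` is a hypothetical helper, and the second input here is just simulated with another `os.urandom` read standing in for a USB key or scavenger:

```python
import hashlib
import os

def mix_sources(*sources: bytes) -> bytes:
    # Hash-combine independent entropy inputs. The output is at least as
    # unpredictable as the single strongest input (assuming SHA-256 acts
    # as a good randomness extractor), so a weak or hostile source
    # cannot degrade the result.
    h = hashlib.sha256()
    for chunk in sources:
        h.update(len(chunk).to_bytes(4, "big"))  # length-prefix each input
        h.update(chunk)                          # to avoid concatenation ambiguity
    return h.digest()

# example: fold a second (here simulated) source into the OS pool output
seed = mix_sources(os.urandom(32), os.urandom(32))
```

the length prefix matters: without it, `mix_sources(b"ab", b"c")` and `mix_sources(b"a", b"bc")` would collapse to the same digest.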

conversely to your argument, there is no harm in aggressively mixing a
large pool with a high rate hardware entropy source. if you are in one
of the worst case scenarios, like seeding an entire new volume for
full disk encryption, then you can manage accordingly: cut out the OS
level kernel pool middle man, the system call boundary, and other
overhead.
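cutting out the middle man can be as simple as seeding once and expanding in userspace. a sketch, assuming SHAKE-256 as the expansion function -- `fill_volume` is a hypothetical helper, and a vetted DRBG (e.g. AES-CTR) would be the production choice:

```python
import hashlib
import os

def fill_volume(nbytes: int, chunk: int = 1 << 20):
    # Seed once from the kernel pool, then expand in userspace with
    # SHAKE-256 so bulk generation pays no per-block system call.
    # Sketch only: production code should use a vetted DRBG construction.
    seed = os.urandom(32)
    produced = 0
    counter = 0
    while produced < nbytes:
        n = min(chunk, nbytes - produced)
        # domain-separate each block with a counter so blocks never repeat
        yield hashlib.shake_256(seed + counter.to_bytes(8, "big")).digest(n)
        produced += n
        counter += 1
```

one `os.urandom(32)` call amortizes over the whole volume; everything after the seed is pure userspace computation.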


> A consequence of Linux having a big pool is that the stirring algorithm is
> expensive because it has to operate over many bits.

but not prohibitively expensive in practice!

again, i find very few situations in which my modern processor is
unable to keep a properly refilled, aggressively reseeded /dev/random
up to any demanded rate of consumption: high speed network services,
common client side uses, most key generation, and so forth.
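the claim is easy to check on your own hardware. a rough measurement sketch -- `kernel_pool_rate` is a hypothetical helper name, and `os.urandom` stands in for the kernel CSPRNG read path:

```python
import os
import time

def kernel_pool_rate(total: int = 8 << 20, chunk: int = 64 << 10) -> float:
    # Rough measurement of how fast the kernel CSPRNG serves userspace
    # reads, in bytes per second. On modern kernels this is typically
    # far above what network services or key generation demand.
    t0 = time.perf_counter()
    read = 0
    while read < total:
        read += len(os.urandom(chunk))
    return total / (time.perf_counter() - t0)
```

compare the number this prints against your actual consumption rate; for most workloads the kernel path is nowhere near the bottleneck.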


> When I count my raw data in bits per second, rather than gigabits per
> second, I am of course going to use them efficiently and mix up a large pot
> of state, so I can get maximum utility. With the RdRand DRNG, the bus is the
> limiting factor, not the supply or the pool size.

fair enough, but consider the inverse, particularly for a skeptical
audience knowing what we do now:

why not mix aggressively with multiple sources if you have the CPU budget?

why not provide access to the raw, un-mixed, un-encrypted,
un-whitened, un-obfuscated state of the raw entropy bits for those so
inclined to use it in such a manner?
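raw access is what makes independent verification possible at all. a minimal illustration, assuming the simplest frequency statistic -- `monobit_bias` is a hypothetical helper, not any standard test suite:

```python
import os

def monobit_bias(raw: bytes) -> float:
    # Fraction of one-bits minus 0.5; hovers near zero for an unbiased
    # source. This is only meaningful on the *raw* stream -- after
    # whitening, even a badly biased source looks perfect, which is
    # exactly why un-whitened access matters for auditing.
    ones = sum(bin(b).count("1") for b in raw)
    return ones / (len(raw) * 8) - 0.5
```

run it over whitened output and a stuck-at-one source both score near-perfect or maximally biased respectively; without the raw bits, the distinction is invisible to the consumer.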


efforts to drive RDRAND into direct use in the linux kernel, bypassing
the kernel entropy pool,

and efforts to steadfastly refuse access to the raw entropy stream,

are thus viewed with suspicion and lend an air of diminished
credibility.


even with all of these concerns, i have publicly said and will
continue to assert that using RDRAND is better than nothing. the
current state of entropy on most operating systems, and especially in
virtual machine environments on those systems, is very poor.

it is just a shame this resource cannot be used to greater utility and
confidence, as would be possible were raw access available.


best regards,

