[cryptography] Kernel space vs userspace RNG

Russell Leidich pkejjy at gmail.com
Sat May 7 14:47:33 EDT 2016


Quite right, userspace cannot launch a DMA other than indirectly via OS
calls targeting various devices. (At least, let's hope not!)

Whatever its absolute value might be, the amount of entropy in the DMA
timing skew has to be higher in practice than that in interrupt timing. The
reason is that, for every interrupt, thousands or even billions of DMA
transactions occur. Although each individual DMA transaction timing is more
predictable than each individual interrupt timing, when taken in the
aggregate, DMA is much richer because of the ratio of their respective
average frequencies. Loosely speaking, this is because, while the timing
error in an event of period P is proportional to P^0.5, the error in an
event of period 0.5P is proportional to (0.5P)^0.5, or about 70% as much --
not 50% as much. Halving the period therefore doubles the event count while
shrinking each event's contribution only to about 70%, so the aggregate grows
by roughly a factor of 1.4 per halving. So the ratio of (entropy provided by
time between DMA cycles, summed over all cycles) to (entropy provided by
interrupt timing) diverges as the number of DMAs per interrupt approaches
infinity.
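
To make that scaling concrete, here is a toy C program (my own sketch, not
part of Enranda) that simply assumes per-event timing uncertainty proportional
to the square root of the period and prints the aggregate over a fixed window
as the event count grows:

/* sqrt_scaling.c: toy illustration of the aggregate-jitter argument above.
   Assumes per-event timing uncertainty ~ sqrt(period); not an Enranda file. */
#include <math.h>
#include <stdio.h>

int main(void) {
  const double window = 1.0;          /* fixed observation window, arbitrary units */
  for (unsigned n = 1; n <= (1u << 20); n <<= 2) {
    double period = window / n;       /* per-event period shrinks as events multiply */
    double per_event = sqrt(period);  /* assumed per-event uncertainty */
    double aggregate = n * per_event; /* equals sqrt(n * window), unbounded in n */
    printf("events=%7u  per_event=%.6f  aggregate=%.3f\n", n, per_event, aggregate);
  }
  return 0;
}

The aggregate column is just sqrt(n * window), which grows without bound in n;
the point is only to illustrate the ratio argument, not to model real DMA
jitter.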

But can userspace see any of this via the timestamp counter? Here is the
demonstration:

~/enranda$ make timedeltaprofile
~/enranda$ temp/timedeltaprofile 0 0 30
+9.988416194207514E-01
~/enranda$ temp/timedeltaprofile 0 0 30
+9.295774360278206E-01

timedeltaprofile is a utility bundled with Enranda that measures
various statistical properties of the timedelta stream. (A "timedelta" is
simply (timestamp(N+1) minus timestamp(N)).) The first number above is the
dyspoissonism (one minus the distributional compression ratio, roughly) of
the timedelta stream at idle. As you can see, it was 99.8% compressible,
i.e. very predictable. The second number was measured while playing a video
and copying files in the background, neither of which was directly visible to
timedeltaprofile. Nonetheless, the dyspoissonism dropped to 92.9%, reflecting
the higher detected timedelta entropy. Yes, some of this entropy
was pseudorandom, but some of it was also due to unknowable physical
parameters regarding those DMA processes, such as the arrival time of
network packets. The point is simply that userspace can detect this without
explicit access to the memory regions in question.
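
For anyone who wants to reproduce the flavor of this without Enranda, here is
a minimal userspace sketch (x86/GCC assumed, compile with -lm). It is not
timedeltaprofile: it just buckets the low bits of consecutive TSC deltas and
reports a crude Shannon entropy rather than dyspoissonism:

/* tsc_deltas.c: minimal userspace sketch of the timedelta idea above.
   This is NOT timedeltaprofile; it samples the TSC and reports a crude
   Shannon-entropy figure for the low bits of consecutive timestamp deltas. */
#include <stdio.h>
#include <math.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc(); x86-specific */

#define SAMPLES (1u << 20)
#define BUCKETS 256u     /* histogram over the low 8 bits of each delta */

int main(void) {
  static uint32_t hist[BUCKETS];
  uint64_t prev = __rdtsc();
  for (uint32_t i = 0; i < SAMPLES; i++) {
    uint64_t now = __rdtsc();
    hist[(now - prev) & (BUCKETS - 1)]++;   /* bucket the timedelta */
    prev = now;
  }
  double bits = 0.0;
  for (uint32_t b = 0; b < BUCKETS; b++) {
    if (hist[b]) {
      double p = (double)hist[b] / SAMPLES;
      bits -= p * log2(p);                  /* Shannon entropy of bucket distribution */
    }
  }
  printf("%.3f bits/delta (max %.0f); run again under I/O load and compare\n",
         bits, log2((double)BUCKETS));
  return 0;
}

Run it at idle and again while copying files or streaming video; the reported
bits per delta should rise under load, just as the dyspoissonism above fell.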

What userspace cannot determine is the extent to which the entropy is due
to unknowable physical parameters; the kernel has a clear advantage in this
regard. Maybe the kernel could sit in a tight loop digesting timedeltas
until some particular number of DMA transfers occurred from a device with
well-characterized physical uncertainties. That would provide a lot more
entropy bandwidth than interrupt timing alone.
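
Lacking kernel code, here is a rough userspace caricature of that loop. The
rotate-xor mix is only a placeholder, and a real kernel implementation would
gate on a count of DMA completions from a well-characterized device rather
than on a fixed iteration count, which this sketch cannot see:

/* digest_loop.c: rough sketch of the "tight loop digesting timedeltas" idea,
   written as userspace C for illustration only. The mix below is a
   placeholder, not a vetted extractor or CSPRNG. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>

int main(void) {
  uint64_t pool = 0, prev = __rdtsc();
  for (uint32_t i = 0; i < (1u << 22); i++) {  /* stand-in for "until N transfers" */
    uint64_t now = __rdtsc();
    pool ^= now - prev;                        /* fold the timedelta into the pool */
    pool = (pool << 13) | (pool >> 51);        /* cheap 64-bit rotation to spread bits */
    prev = now;
  }
  printf("pool=%016llx (illustrative only)\n", (unsigned long long)pool);
  return 0;
}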


On Sat, May 7, 2016 at 3:45 PM, Krisztián Pintér <pinterkr at gmail.com> wrote:

>
>
>
> Russell Leidich (at Friday, May 6, 2016, 10:16:12 PM):
> > Most of the entropy in a system is manifest in terms of the clock
> > skew between direct memory access (DMA) transfers from external
> > devices and the CPU core clocks, which unfortunately does not
> > traverse the kernel in any directly observable manner.
>
> someone please confirm this, because i'm not a linux expert, but i
> don't believe user space code can do dma without the kernel knowing
> about it.
>
> also, i assert that such clock drifts provide much less entropy than
> you make it look like.
>
>
> > interrupt timing, unless we extend the definition of "interrupt" to
> > include quasiperiodic memory accesses from external clients.
>
> again, i'm no expert in low level kernel stuff, but to my knowledge,
> everything happens through interrupts, even dma uses it to report the
> end of an operation.
>
>
> _______________________________________________
> cryptography mailing list
> cryptography at randombit.net
> http://lists.randombit.net/mailman/listinfo/cryptography
>