[cryptography] Intel RNG

Marsh Ray marsh at extendedsubset.com
Tue Jun 19 00:46:01 EDT 2012


On 06/18/2012 10:21 PM, ianG wrote:
>
> The first part is that AES and block algorithms can be quite tightly
> defined with a tight specification, and we can distribute test
> parameters. Anyone who's ever coded these things up knows that the test
> parameters do a near-perfect job in locking implementations down.

Yes, this is what separates the engineers from the hobbyists.
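Concretely, a single known-answer vector pins an AES implementation to
one correct output. A quick sketch in Python (assuming PyCrypto or a
compatible binding is available):

    from Crypto.Cipher import AES

    # FIPS-197 Appendix C.1 example vector for AES-128.
    key = bytes.fromhex("000102030405060708090a0b0c0d0e0f")
    pt  = bytes.fromhex("00112233445566778899aabbccddeeff")
    ct  = AES.new(key, AES.MODE_ECB).encrypt(pt)
    assert ct == bytes.fromhex("69c4e0d86a7b0430d8cdb78070b4c55a")

Get a single byte wrong anywhere in the pipeline and the check fails
loudly.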

Case in point: my main complaint about bcrypt and scrypt. The absence 
of a comprehensive set of test vectors has allowed insecure and 
incompatible implementations to creep into existence.
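(To be fair, ISTR the scrypt paper does list a handful of vectors in an
appendix; my point is that a handful is not a comprehensive suite. The
harness is trivial to write. Here 'scrypt' is a hypothetical stand-in
for whatever implementation is under test, and the vector is the
paper's empty-password case, worth re-checking against the source:)

    from my_scrypt import scrypt   # hypothetical binding under test

    # scrypt(P="", S="", N=16, r=1, p=1, dkLen=64)
    expected = bytes.fromhex(
        "77d6576238657b203b19ca42c18a0497"
        "f16b4844e3074ae8dfdffa3fede21442"
        "fcd0069ded0948f8326a753a0fc81f17"
        "e8d3e0fb2e0d3628cf35e20c38d18906")
    assert scrypt(password=b"", salt=b"", N=16, r=1, p=1,
                  dklen=64) == expected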

> This results in the creation of a black-box or component approach.
> Because of this and perhaps only because of this, block algorithms and
> hashes have become the staples of crypto work. Public key crypto and
> HMACs less so. Anything crazier isn't worth discussing.

I don't get it. Why can't we have effective test vectors for HMACs and 
public key algorithms?
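For HMAC, at least, the vectors already exist: RFC 4231 publishes them
for the whole HMAC-SHA-2 family. Here's test case 1 checked with
Python's standard hmac module:

    import hmac, hashlib

    # RFC 4231 test case 1: HMAC-SHA-256.
    key  = b"\x0b" * 20
    data = b"Hi There"
    expected = bytes.fromhex(
        "b0344c61d8db38535ca8afceaf0bf12b"
        "881dc200c9833da726e9376c2e32cff7")
    assert hmac.new(key, data, hashlib.sha256).digest() == expected

Deterministic public key operations (RSA with PKCS#1 v1.5 padding,
say) are lockable the same way; only the nondeterministic parts resist.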

> Then there are RNGs. They start from a theoretical absurdity that we
> cannot predict their output, which leads to an apparent impossibility of
> black-boxing.
>
> NIST recently switched gears and decided to push the case for
> deterministic PRNGs. According to original thinking, a perfect RNG was
> perfectly untestable, whereas a perfectly deterministic RNG was also
> perfectly predictable. This was a battle of two not-goods.
>
> Hence the second epiphany: NIST were apparently reasoning that the
> testability of the deterministic PRNG was the lesser of the two evils.

But it's not even a binary choice. We can divide the system into 
components and test each component separately with the best available 
methods.

Even the most deterministic crypto implementation will have some squishy 
properties: e.g., power and EM side channels, or resilience to fault 
injection. The engineering goal is to make the proportion of the system 
that can be rigorously certified as large as possible, and the 
proportion that can't be tested well at all as small as possible.
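To make the split concrete, here is a toy illustration (my own
construction, emphatically not SP 800-90A): the deterministic core
below is fully known-answer testable with fixed seeds, and obtaining a
good seed is punted to the caller.

    import hashlib

    class ToyHashDRBG(object):
        # Sketch only: a hash-based deterministic generator. The
        # whole class can be pinned down with test vectors because
        # a fixed seed yields fixed output.
        def __init__(self, seed):
            self.state = hashlib.sha256(b"init" + seed).digest()
            self.counter = 0

        def generate(self, n):
            out = b""
            while len(out) < n:
                self.counter += 1
                out += hashlib.sha256(
                    self.state +
                    self.counter.to_bytes(8, "big")).digest()
            # Ratchet state so earlier output can't be recomputed.
            self.state = hashlib.sha256(b"ratchet" + self.state).digest()
            return out[:n]

    # Determinism is exactly what makes it testable:
    a = ToyHashDRBG(b"fixed test seed").generate(32)
    b = ToyHashDRBG(b"fixed test seed").generate(32)
    assert a == b

The untestable part, where the seed comes from, is now as small as we
can make it.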

> They wanted to black-box the PRNG, because black-boxing was the critical
> determinant of success.
>
> After a lot of thinking about the way the real world works, I think they
> have it right.

It's almost as if they know a thing or two about testing stuff. :-)

> Use a deterministic PRNG, and leave the problem of
> securing good seed material to the user. The latter is untestable
> anyway, so the right approach is to shrink the problem and punt it
> up-stack.
>
> Taking that back to Intel's efforts. Unfortunately it's hard to do that
> deterministic/seed breakup in silicon. What else do they have?

One thing they could do is provide a mechanism to access raw samples 
from the Entropy Source component. I.e., the data that "Intel provided 
[to Cryptography Research] from pre-production chips. These chips allow 
access to the raw ES output, a capability which is disabled in 
production chips."

Obviously these samples can't go back into the DRBG, but some developers 
would probably like to estimate the entropy in the raw data themselves. 
They would likely place more trust in the source if they could reach 
that conclusion with their own code.
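Even a crude estimator would be informative. A naive sketch (it assumes
i.i.d. bytes, which real sources violate, so read the result as an
optimistic upper bound rather than a guarantee):

    import math
    from collections import Counter

    def min_entropy_per_byte(samples):
        # Most-common-value estimate: -log2(p_max) bits per byte.
        counts = Counter(samples)
        p_max = max(counts.values()) / float(len(samples))
        return -math.log(p_max, 2)

A raw physical source reading out near 8 bits/byte here should raise
eyebrows; that usually means you're looking at conditioned output, not
the raw ES.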

ISTR OpenBSD performs this type of analysis when feeding timing samples 
into the pool.

> The components / black-boxing approach in cryptoplumbing has been ultra
> successful. It has also had a rather dramatic effect on everything else,
> because it has raised expectations. We want everything else to be as
> "perfect" as the block encryption algorithm.

And I want a pony!

> Unfortunately, that's not possible. We need to manage our expectations.

I think it's entirely reasonable for us to insist on an efficient, 
high-quality source of cryptographically secure random numbers. After 
all, this is the basis of so many other important security properties 
that we should be grateful for all the powerful simplifying assumptions 
it enables. Remember the old days when we had to bang on the keyboard 
and move the mouse in order to generate keys?

What we need to get better at, IMHO, is incorporating experience into 
our threat model and judging relative risks. For example, a lot of stuff 
going on in kernel RNGs looks to me like additional complexity that's 
more likely to breed bugs than to defend against an imagined 
three-letter agency with space alien supercomputers.

- Marsh


