[cryptography] Intel RNG

Jon Callas jon at callas.org
Mon Jun 18 16:21:20 EDT 2012



On Jun 18, 2012, at 11:15 AM, Jack Lloyd wrote:

> On Mon, Jun 18, 2012 at 10:20:35AM -0700, Jon Callas wrote:
>> On Jun 18, 2012, at 5:26 AM, Matthew Green wrote:
>> 
>>> The fact that something occurs routinely doesn't actually make it a good idea. I've seen stuff in FIPS 140 evaluations that makes my skin crawl. 
>>> 
>>> This is CRI, so I'm fairly confident nobody is cutting corners. But that doesn't mean the practice is a good one. 
>> 
>> I don't understand.
>> 
>> A company makes a cryptographic widget that is inherently hard to
>> test or validate. They hire a respected outside firm to do a
>> review. What's wrong with that? I recommend that everyone do
>> that.
> 
> When the vendor of the product is paying for the review, _especially_
> when the main point of the review is that it be publicly released, the
> incentives are all pointed away from looking too hard at the
> product. The vendor wants a good review to tout, and the reviewer
> wants to get paid (and wants repeat business).

Not precisely.

Reviewers don't want a review published that shows they gave a pass on a crap system. Producing a crap product hurts business more than anything else in the world. Reviews are products. If a professional organization gives a pass on something that turns out to be bad, it can (and has) destroyed the organization.

The reviewer is actually in a win-win situation: no matter what the result is, they win. But ironically, or perhaps perversely, a bad review is better for them than a good one, because the reviewer gains far more from finding problems.

A positive review is not only lacking in the titillation that comes from slagging something; it also runs up against the fact that you can't prove something is secure. When you give a good review, you lay the groundwork for the next people to come along and find something you missed -- and I guarantee it, you missed something. There's no system in the world with zero bugs.

Of course there are perverse incentives in reviews. That's why when you read *any* review, you have to have your brain turned on and see past the marketing hype and get to the substance. Ignore the sizzle, look at the steak.

> 
> I have seen cases where a FIPS 140 review found serious issues, and
> when informed the vendor kicked and screamed and threatened to take
> their business elsewhere if the problem did not 'go away'. In the
> cases I am aware of, the vendor was told to suck it and fix their
> product, but I would not be so certain that there haven't been at
> least a few cases where the reviewer decided to let something slide. I
> would also imagine in some of these cases the reviewer lost business
> when the vendor moved to a more compliant (or simply less careful)
> FIPS evaluator for future reviews.

I agree with you completely, but that's somewhere between irrelevant and a straw man.

FIPS 140 is exasperating because of the way it is bi-modal -- strictly pass/fail -- in many, many things. NIST themselves are cranky about calling it a "validation" as opposed to a "certification" because they recognize such problems themselves.

However, this paper is not a FIPS 140 evaluation. Anything one can say, positive or negative, about FIPS 140 is at best tangential to this paper. I just searched the paper for the string "FIPS"; there are six occurrences of that word. One is a reference discussing how a bum RNG can blow up DSA/ECDSA (FIPS 186). The other five are in this paragraph:

    In addition to the operational modes, the RNG supports a FIPS
    mode, which can be enabled and disabled independently of the
    operational modes. FIPS mode sets additional restrictions on how
    the RNG operates and can be configured, and is intended to
    facilitate FIPS-140 certification. In first generation parts, FIPS
    mode and the XOR circuit will be disabled. Later parts will have
    FIPS mode enabled. CRI does not believe that these differences in
    configuration materially impact the security of the RNG. (See
    Section 3.2.2 for details.)

So while we can have a bitch-fest about FIPS-140 (and I have, can, do, and will bitch about it), it's orthogonal to the discussion.

It appears that you're suggesting the syllogism:

FIPS 140 does not demonstrate security well.
This RNG is aimed at FIPS 140.
Therefore, this RNG is not secure.

Or perhaps a conclusion of "Therefore, this paper does not demonstrate the security of the RNG," which is less provocative.

What they're actually saying is that they don't think that FIPSing the RNG will "materially impact the security of the RNG" -- which if you think about it, is pretty faint praise.


> 
> I am not in any way suggesting that CRI would hide weaknesses or
> perform a lame review.

But that is *precisely* what you are saying.

Jon Stewart could parody that argument far better than I can. You're not saying that CRI would hide things; you're just saying that accepting payment sets the incentives all the wrong way, and that any company will put out shoddy work so long as it gets paid, especially when a bad review would make the customer mad.

Come on. If you believe that this report is not worth the bits it's written on because it was done for pay, at least say so. If you think that the guys who put their names on the paper have prostituted their reputations, have the courage to say so.

> However the incentives of the relationship do
> not favor a strong review, and thus the only reason I would place
> credence with it is my impression of the professionalism of the CRI
> staff. In contrast, consider a review by, say, a team of good grad
> students, where the incentive is very strongly to produce a
> publishable result and only mildly on making the vendor happy. Those
> incentives again are not perfect (what is), especially given how
> academic publishing works, but they are somewhat more aligned with the
> end users' desire to have a product that is secure.

In other words, only grad students are qualified to make an independent review, and universities are not tainted by money.

I think sharp grad students are among the best reviewers possible. I think they do fantastic work. But I have never seen a single paper from them that didn't essentially stop abruptly because the funding ran out, or the time ran out, or they decided to do something else, like graduate.

*All* reviews are limited by scope that is controlled by resources. *All* reviews have a set of perverse incentives around them. The perverse incentives of professional review are indeed out of phase with the perverse incentives of academics. If you're a grad student and you review something and it turns out it's pretty good, you'll likely get told by your advisor that your good review demonstrates that you're a nitwit, and be told to go find some real problems.

I will also note that academic reviews are not immune to marketing hype. In my experience, the amount of hype is inversely proportional to the professionalism of the reviews (whatever the heck professionalism means). Academic reviews and things at hacker cons are high-hype. As someone who produces a hacker con, I can tell you that we do not usually tell people that they should lower the hype quotient of their talks before they present.

I have, however, expressed the opinion that if a well-known security expert were listed on a program with the session "Soandso will read from the telephone book," that might be the ultimate way to fill a room.

> 
>> Un-reviewed crypto is a bane.
> 
> Bad crypto with a rubber stamp review is perhaps worse because someone
> might believe the stamp means something.

So we shouldn't bother to get reviews, because they're just rubber stamps?

Here's the way I look at it:

* Intel has produced a hardware RNG in the CPU. It's shipping on new Ivy Bridge CPUs. 

* Intel would like to convince us that it's worth using. There are many reasons for this, not the least of which being that if we're not going to use the RNG, people would like that chip real estate for something else.

* We actually *need* a good hardware RNG in CPUs. I seem to remember a kerfuffle in the past about key generation caused by low entropy at system startup. This could help mitigate such a problem.

* Intel went to some experts and hired them to do a review. They went to arguably the best people in the world to do it, given their expertise in hardware in general and differential side-channel cryptanalysis.

* Obviously, those experts found problems. Obviously, those problems got fixed. Obviously, it's a disappointment that we don't get to hear about them.

* The resulting report is mutually acceptable to both parties. It behooves the reader to peer between the lines and intuit what's not being said. I know there are issues that didn't get fixed. I know that there are architectural changes that the reviewers suggested that will come into later revisions of the hardware. There always are.

* The report answers a lot of questions I had about the hardware. It was very informative. However, it also raised a number of questions, and confirmed some minor disappointments that I had when I merely speculated mentally about the RNG.

* All in all, though, I'm happy to take this RNG and put it along with everything else that should be in a good RNG system like /dev/random. I would be happy to add this into my entropy gathering and pool management. It materially improves the state of the art. I would even go so far as to say that if /dev/urandom were to include this XORed on top of whatever else it was doing, I'd be very, very happy indeed. (There's a sketch of that XOR mixing just after this list.)

* I think that Intel should get praise for building this RNG, commissioning the report and releasing it. They shouldn't be castigated for it. We should encourage them and others to do more of this.

* If we want to have a discussion of the SP 800-90+ DRBGs, that's also a great discussion. Having implemented an AES-CTR DRBG myself (which is what this is), I have some pointed opinions. That discussion is more than tangential to this, but it's also a digression, unless we want to attack the entire category of AES-CTR DRBGs (which is an interesting discussion in itself). (A minimal sketch of such a DRBG also follows below.)
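
Since two of the bullets above lean on it, here is a minimal sketch of the XOR mixing I mean. This is my own illustration, not anything from the CRI report: the function name xor_mix is made up, and real pool code (like /dev/random's) does considerably more. The point is only that XORing an independent hardware stream on top of an existing generator can't make the output worse, and it helps exactly when the existing pool is starved, such as at first boot.

    # A sketch, assuming Python 3. XOR an independent hardware-RNG
    # stream on top of an existing generator's output. If the two
    # streams are independent, the result is at least as unpredictable
    # as the stronger of the two.
    import os

    def xor_mix(pool_stream: bytes, hw_stream: bytes) -> bytes:
        # Combine two equal-length, independently produced byte strings.
        if len(pool_stream) != len(hw_stream):
            raise ValueError("streams must be the same length")
        return bytes(a ^ b for a, b in zip(pool_stream, hw_stream))

    # Stand-in usage: os.urandom plays the existing pool. The second
    # argument would come from the on-chip RNG (RDRAND), which Python
    # can't issue directly, so we fake it here to keep the sketch
    # runnable.
    hw_bytes = os.urandom(32)                # pretend this is RDRAND output
    mixed = xor_mix(os.urandom(32), hw_bytes)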

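And since I brought it up, here is roughly what an AES-CTR DRBG looks like. This is my own minimal sketch of the SP 800-90A CTR_DRBG (no derivation function, no reseed counter, AES-256 via the pyca/cryptography package), and it is emphatically not Intel's implementation or reviewed code. It's just enough to show the update/generate structure this category of DRBG shares.

    # A sketch of an SP 800-90A-style AES-256 CTR_DRBG, assuming the
    # pyca/cryptography package. Simplified (no derivation function,
    # no reseed counter); illustrative only, do not use as-is.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    BLOCK = 16                  # AES block size, bytes
    KEYLEN = 32                 # AES-256 key size, bytes
    SEEDLEN = KEYLEN + BLOCK    # size of the security-relevant state

    def _aes_block(key: bytes, block: bytes) -> bytes:
        # Encrypt a single 16-byte block with AES.
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    class CtrDrbg:
        def __init__(self, entropy: bytes):
            # Instantiate from SEEDLEN bytes of full-entropy seed.
            if len(entropy) != SEEDLEN:
                raise ValueError("need %d seed bytes" % SEEDLEN)
            self.key = bytes(KEYLEN)
            self.v = 0
            self._update(entropy)

        def _update(self, provided: bytes) -> None:
            # CTR_DRBG_Update: run AES in counter mode over V to derive
            # a fresh (Key, V), XORing in the provided input.
            temp = b""
            while len(temp) < SEEDLEN:
                self.v = (self.v + 1) % (1 << (8 * BLOCK))
                temp += _aes_block(self.key, self.v.to_bytes(BLOCK, "big"))
            temp = bytes(a ^ b for a, b in zip(temp[:SEEDLEN], provided))
            self.key = temp[:KEYLEN]
            self.v = int.from_bytes(temp[KEYLEN:], "big")

        def generate(self, nbytes: int, additional: bytes = b"") -> bytes:
            # Optional pre-update with additional input, AES-CTR
            # keystream as output, then a mandatory post-update.
            if additional:
                additional = additional.ljust(SEEDLEN, b"\0")[:SEEDLEN]
                self._update(additional)
            else:
                additional = bytes(SEEDLEN)
            out = b""
            while len(out) < nbytes:
                self.v = (self.v + 1) % (1 << (8 * BLOCK))
                out += _aes_block(self.key, self.v.to_bytes(BLOCK, "big"))
            self._update(additional)
            return out[:nbytes]

    drbg = CtrDrbg(os.urandom(SEEDLEN))   # seed from the system pool
    session_key = drbg.generate(32)       # 256 bits of DRBG output
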
In conclusion:

Should a smart person furrow their brow and peer jaundicedly at a professional report? Of course! Duh! But they should do that to all reports, professional, academic, or amateur. They should ask themselves why they should believe it, and how much of it to believe. They should look at the context within which the report was created. There's always bias.

But to suggest that what we need is fewer analyses of security systems hurts security. It makes the world a worse place. To suggest that professionals are inherently corrupt is insulting to everyone in this business, and in my opinion says far more about the people who suggest it than about the professionals. To suggest that academia is somehow free of bias shows a blind spot. To go further and suggest that only academia has pure motives shows how big academia's blind spots are.

Commercial organizations are irrationally afraid of getting reviews. They know it's silly, but they can't help it. They need help and encouragement to strive for quality. If you push them back into not evaluating their own stuff, if you make it be something where evaluating their own security will get them castigated, then you're hurting security. Stop it.

We need more flowers blooming. If one flower or another stinks, sure. There are plenty of stinky flowers. Call them out. But flowers do not stink because they were planted by a gardener. They stink because they stink. We need more flowers, and if the need for flowers lets people have jobs as gardeners, that's good. 

	Jon



