[cryptography] philosophical question about strengths and attacks at impossible levels

Ian G iang at iang.org
Fri Oct 15 21:29:07 EDT 2010

Hi Steven and all,

On 16/10/10 1:56 AM, Steven Bellovin wrote:
> There are many possible answers to your query -- including, of course, "you're right" -- but maybe we should be a little bit more charitable.  Maybe, in fact, they're right.

I think one of the flaws in all this is the old

     "what's your threat model?"

question.  In this particular case, we know that NIST has explicitly 
(and by law) chosen a particular business model, which ordains the 
threat model [0].

That business model is that of USA government agencies.  Their threat 
model is that which is created by the NSA, against USA government 
agencies.  For better or worse (*they* may be right in their analysis), 
that's something that does not really relate to us.

We should always keep in mind that NIST's business and threat models are 
not ours.

> The real goal is a certain degree of security -- an enemy cannot usefully attack it.  By "useful" I mean "in time to cause harm to someone".  Unfortunately, the cryptographic community -- at least the open sector community -- has no such metrics.  [snip]

On this I would demur.  We do have a good metric: losses.  Risk 
management starts from the business, and then moves on to how losses are 
affecting that business, which informs our threat model.

We now have a substantial, measurable history of the results of open use 
of cryptography.  We can now substantially and safely predict the result 
of any of the familiar cryptographic components in widespread use, 
within the bounds of risk management.

The result of 15-20 years is that, to a high degree of reliability, 
nobody has ever lost money because of a cryptographic failure.  
Certainly that holds within the bounds of any open and/or commercial 
risk management model, including orders of magnitude of headroom.
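To make the risk-management point concrete, here is a minimal sketch of the standard annualized-loss-expectancy (ALE) comparison that such a model rests on.  All figures are hypothetical placeholders of my own invention, not measured data; the point is only that expenditure follows measured losses, and a threat with no observed loss history contributes nothing to the calculation.

```python
# Illustrative annualized-loss-expectancy (ALE) comparison.
# All rates and loss figures below are hypothetical, for illustration only.

def ale(annual_rate: float, loss_per_event: float) -> float:
    """ALE = expected number of events per year * loss per event."""
    return annual_rate * loss_per_event

# Hypothetical: losses from cryptographic failure vs. a mundane threat.
crypto_failure_ale = ale(annual_rate=0.0, loss_per_event=1_000_000)  # no observed events
phishing_ale = ale(annual_rate=50.0, loss_per_event=2_000)           # routine, measured losses

# Risk management directs spending where the measured expected loss is.
print(f"crypto: ${crypto_failure_ale:,.0f}/yr, phishing: ${phishing_ale:,.0f}/yr")
```

With a measured event rate of zero, the cryptographic-failure line contributes nothing, however large the per-event loss is assumed to be; that is the arithmetic behind "we cannot measure any losses, end of story."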

> At best, resistance can be demonstrated to certain classes of attacks.  Against unknown attacks -- or against attacks unknown in the open community -- not very much can be said. ...
> What if there is a new attack lurking, perhaps to be discovered (or released) 10 years from now?

Right, this can be said.  But risk management rules it out: if an attack 
is unknown to us, we should not include it in the risk model, because we 
cannot measure any losses.  End of story.

One reason for this is that we cannot disambiguate between good luck at 
avoiding the bogeyman and over-expenditure on something that isn't 
there, as Dan Geer put it [1].  Business and the open community work to 
benefit; we don't play the lottery for the fun of it.

Hence, the whole discussion about 512 bits, etc., is theoretically 
interesting, but it's not our discussion.

Where Zooko and others may be getting concerned is that NIST, having 
adopted the NSA's threat model, is now apparently pressuring or leading 
open and/or commercial organisations such as the browsers, the various 
IETF groups, the open community at large, etc, to follow.

But our threat model is demonstrably different.  And the result of 
following someone else's threat model is that we will make mistakes [2]. 
In this case, we have a prima facie case that adopting NIST's numbers 
will reduce our own security [3], because of the HTTPS Everywhere 
concept [4].

So in a sense, maybe the Zooko thread can be seen at a political level 
as kick-back against NIST, asking them to think more seriously about 
who and what and where they are leading.


[0] http://csrc.nist.gov/publications/PubsDrafts.html#SP-800-131

[1] http://financialcryptography.com/mt/archives/001255.html

[2] a comprehensive history of failed IETF committee security designs 
would be very interesting...

[3] http://financialcryptography.com/mt/archives/001286.html

[4] https://www.eff.org/https-everywhere/
