[cryptography] philosophical question about strengths and attacks at impossible levels

Ian G iang at iang.org
Fri Nov 19 18:39:03 EST 2010


On 20/11/10 6:26 AM, travis+ml-rbcryptography at subspacefield.org wrote:
> On Sat, Oct 16, 2010 at 12:29:07PM +1100, Ian G wrote:
>> On this I would demur.  We do have a good metric:  losses.  Risk
>> management starts from the business, and then moves on to how losses are
>> affecting that business, which informs our threat model.
>>
>> We now have a substantial, measurable history of the results of open use
>> of cryptography.  We can now substantially and safely predict the result
>> of any of the familiar cryptographic components in widespread use,
>> within the bounds of risk management.
>>
>> The result of 15-20 years is that, to a high degree of reliability,
>> nobody has ever lost money because of a cryptographic failure.  Certainly
>> within the bounds of any open and/or commercial risk management model,
>> including orders of magnitude of headroom.
>
> Does the fact that parts of Stuxnet were signed by two valid certs
> count as a cryptographic failure?


Short answer:  no.

Medium answer:  if you look at the so-called Internet Threat Model [1] 
on which SSL was founded, the node was ruled outside the model [2]. 
Stolen valid certs are node problems, not wire problems, and this is 
typically the assumption made in all certificate protocols.

Longer answer:  depends on who is arguing, and what follows is my 
particular counter-cultural opinion.  I am widely disagreed with :)

Typically, in promoting a technology, people will point at the 
cryptographic purity in a narrow fashion, and then market the protection 
delivered in a broader context.  This is called a bait & switch in the 
marketing world.

With certificate-based protocols, the protection was technically founded 
on cryptographic and protocol perfection [3].  But the marketing was 
founded on wider application-level claims of scary MITMs stealing your 
credit cards.  So they show us a canonical layered security model:

    1.  DES, MD5, AES, SHA1, etc.
    2.  The raw protocol.
    3.  The wider protocol informed by certificates.
    4.  The protocol + certificates backed by CAs, assurance.
    5.  The customary meanings encoded into libs & modules.
    6.  The application.
    7.  The businesses and minds of those exposed.

Like an onion, we can peel it and get down to the layer we want.
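
To make the layering concrete, here is a minimal Python sketch (stock 
stdlib ssl/socket; the host and the layer commentary are my own, for 
illustration only) showing how independently the links can be tuned. 
The cipher layers stay at full strength while the certificate layers 
are switched off entirely:

    import socket
    import ssl

    # Layers 1-2: ciphersuite and protocol, negotiated at full strength.
    ctx = ssl.create_default_context()

    # Layers 3-4: certificates and CA assurance, switched off in two lines.
    ctx.check_hostname = False        # must be unset before CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE   # any certificate at all is accepted

    # The handshake below still negotiates a full-strength cipher, yet
    # will accept any certificate whatsoever: layers 1-2 intact, 3-4 gone.
    with socket.create_connection(("example.org", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.org") as tls:
            print(tls.cipher())       # strong cipher, unauthenticated peer

Nothing at layers 5-7 need notice the difference, which is rather the 
point.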

But, an attacker can also peel your onion.  So the attack is really 
against the complexity of SSL and family, as marketed and used.  Or, 
in the old aphorism, "a chain is only as strong as its weakest link," 
and here the strength of the links is wildly mismatched.

And this brings up the brittleness of the system:  it is far too 
complex, so complex that most people can't identify the disparity in 
strength between the different links of the chain.  Even industry 
insiders can't deal with the cognitive dissonance of connecting the 
titanic chain links of AES128 and SHA1 to the hemp rope of 
certificates and the daisies of user presentation.
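
The disparity is easy to put rough numbers on.  A minimal sketch, 
compiling NIST SP 800-57's key-strength equivalences with published 
cryptanalysis figures (the compilation is mine, for illustration; the 
certificate link deliberately has no row, because its cost to an 
attacker is social and financial, not computational):

    # Approximate bits of security per cryptographic "link".
    # RSA rows follow NIST SP 800-57 Part 1; DES is its key size;
    # SHA-1 is an estimated collision cost from published cryptanalysis.
    LINK_STRENGTH_BITS = {
        "DES-56":             56,   # brute-forced in days by 1998 (EFF)
        "RSA-512":            50,   # factored in 1999 (RSA-155); rough
        "RSA-1024":           80,
        "RSA-2048":          112,
        "AES-128":           128,
        "SHA-1 (collision)":  63,   # ~2**63 estimated; unbroken in practice
    }

    # Weakest computational link vs strongest: a 2**65 gap in attack cost.
    gap = LINK_STRENGTH_BITS["AES-128"] - LINK_STRENGTH_BITS["SHA-1 (collision)"]
    print("disparity: 2**%d" % gap)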

The certificate family of protocols (broadly) is therefore likely always 
open to this sort of attack [4].

Hence, putting all this together, we can suggest that the cryptography 
was already strong enough at around 56-bit DES + 512-bit RSA, because 
this family of protocols has in practice been attacked at the higher 
rings, not at the crypto.  And the current crusade to move from 
1024-bit to 2048-bit RSA is a crowd-pleasing affair [5].
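
Continuing the sketch above, the 1024-to-2048 move buys a spectacular 
amount of headroom at layer 1, and exactly nothing at the layers where 
the attacks actually arrive:

    # Reuses LINK_STRENGTH_BITS from the earlier sketch.
    def layer1_headroom(old, new):
        """Multiplier in brute-force work bought by a key-size upgrade."""
        return 2 ** (LINK_STRENGTH_BITS[new] - LINK_STRENGTH_BITS[old])

    print(layer1_headroom("RSA-1024", "RSA-2048"))  # 2**32, ~4.3 billion x
    # The cost of stealing a signing key from a node (the Stuxnet route)
    # is unchanged.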

So, from a risk management perspective, we can comfortably predict that 
there will be a "low to very low" risk of cryptographic failure at any 
commonly deployed level [6];  this will remain the case until 
higher-order layers do a lot of work to make it more interesting for 
attackers to go deep.  From a business perspective, anyone who's 
spending time on the lower layers, at the expense of the higher layers, 
is negligent.

As I say, highly counter-cultural and widely disagreed with :)

iang



[1] http://www.iang.org/ssl/rescorla_1.html

[2] http://iang.org/ssl/wytm.html  Note that I'm assuming that at this 
level there is an equivalence between SSL and code-signing and all the 
other certificate usages, because they all more or less leant on the 
original security model for SSL.

[3] There have been only a few weaknesses (3?) since SSL v1, so as a 
protocol it has stood the test of time, from a cryptographic point of 
view only.  The other family members haven't been tested as much.

[4] if one were to draw a lesson from this, it would be something like:
    Only end-to-end security is strong security;
    module-based or layer-based security is inherently weak and brittle
    where the layers are not aligned together by one mind.
Which would be fine, if we could reliably ensure that the system were 
only deployed in medium-security installations.  Hence:
http://iang.org/ssl/h5_security_begins_at_the_application_and_ends_at_the_mind.html

[5] On the crusades, I highly recommend this book:
http://en.wikipedia.org/wiki/Extraordinary_Popular_Delusions_and_the_Madness_of_Crowds

[6] I'm assuming "open crypto" in all this.  I'm also excluding the 
academic hackers, as they are outside the loss metric.


