[cryptography] caring harder requires solving once for the most demanding threat model, to the benefit of all lesser models

coderman coderman at gmail.com
Mon Oct 13 11:45:25 EDT 2014


On 10/13/14, ianG <iang at iang.org> wrote:
> ...
> you're welcome ;-)

a considered and insightful response to my saber-rattling diatribe.

i owe you a beer, sir!



> Ah well, there is another rule we should always remember:
>
>      Do not use known-crap crypto.
>
> Dual_EC_DRBG is an example of a crap RNG.  For which we have data going
> back to 2006 showing it is a bad design.

let's try another example: Intel RDRAND or RDSEED.  depend on it as
the sole source of entropy?

in theory, the only attacks that would allow an adversary to
manipulate the output are outside scope. (e.g. the data shows them as
nation-state-level hypotheticals)

is "depending on a single entropy source" the "known-crap" part? or is
it the un-verifiable output of this specific source that is
"known-crap"?

(or am i overreaching, and do you advocate direct and sole use of
RDRAND everywhere? :)
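for what it's worth, the usual answer to the single-source question is
to mix: feed RDRAND into a pool alongside other sources, so a
backdoored source cannot control the output as long as any one input
is honest. a toy sketch in python (os.urandom and timing jitter stand
in for hardware sources; `mixed_entropy` is a hypothetical helper, not
how any particular kernel does it):

```python
import hashlib
import os
import time

def mixed_entropy(n: int = 32) -> bytes:
    """combine several independent sources through a hash; the output
    stays unpredictable as long as any one input is honest, so a
    single backdoored source (an RDRAND, say) cannot control it."""
    h = hashlib.sha256()
    h.update(os.urandom(32))                             # OS entropy pool
    h.update(time.perf_counter_ns().to_bytes(8, "big"))  # timing jitter (weak but independent)
    h.update(os.getpid().to_bytes(4, "big"))             # process-local state
    return h.digest()[:n]
```

the design point is that hashing is one-way: a malicious source would
have to predict every other input to steer the digest.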



> Others in this category include:  RC4, DES, MD5, various wifi junk
> protocols, etc.

if RC4 is known-crap, then how is a downgrade to known-crap not a problem?
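one mitigation, at least client-side, is to make the downgrade
impossible to negotiate in the first place. a hedged python sketch
(the cipher-string syntax is OpenSSL's; modern OpenSSL builds exclude
RC4 from HIGH by default anyway, so this is belt-and-suspenders):

```python
import ssl

# refuse known-crap ciphers outright: a downgrade attack can only
# succeed if both ends are still willing to speak the weak suite.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("HIGH:!RC4:!MD5:!aNULL:!eNULL")  # OpenSSL cipher-string syntax
# handshakes through this context fail rather than fall back to RC4
```

a client configured this way turns "downgrade to known-crap" into a
visible connection failure instead of a silent weakening.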



>> Q: 'Should I switch away from 1024 bit strength RSA keys?'
>
> I agree with that, and I'm on record for it in the print media.  I am
> not part of the NIST lemmings craze.
>
> So, assuming you think I'm crazy, let's postulate that the NSA has a box
> that can crunch a 1024 key in a day.  What's the risk?
> ...
> WYTM?  The world that is concerned about the NSA is terrified of open
> surveillance.  RSA1024 kills open surveillance dead.

consider a service provider that i use, like Google, with a
hypothetical 1024-bit RSA key securing TLS. they don't use forward
secrecy, so recovery of their private key also recovers content.

what is the risk that a Google-like provider's key could be attacked?
i have no idea.  but it is certainly greater than my risk as a single
individual.

regarding open surveillance, this is a potential mechanism for it
despite the appearance of privacy.

at what point does an insufficient key length become "known-crap" vs.
needless lemming craziness?

said another way, "the data" is only useful if you and those you trust
are not outliers.  in addition, "the data" is only retrospective; by
definition, class breaks and novel attacks are not worth considering
until they become known and used.  does the difficulty of migrating
away from a newly-known-crap mistake factor into how you draw the line?
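as a rough way to put numbers on "insufficient", the general number
field sieve running-time heuristic maps modulus size to
symmetric-equivalent bits; it is where the familiar 1024≈80-bit and
2048≈112-bit figures come from. a back-of-the-envelope python sketch
(it drops the o(1) term, so absolute values run a few bits higher than
the published NIST equivalences):

```python
import math

def gnfs_strength_bits(modulus_bits: int) -> float:
    """symmetric-equivalent strength of an RSA modulus, estimated from
    the GNFS heuristic L_n[1/3, (64/9)^(1/3)].  crude: drops the o(1)
    term, so treat the output as indicative, not authoritative."""
    n = modulus_bits * math.log(2)  # natural log of the modulus
    work = (64 / 9) ** (1 / 3) * n ** (1 / 3) * math.log(n) ** (2 / 3)
    return work / math.log(2)       # convert ln-scale work factor to bits
```

by this estimate a 1024-bit modulus sits in the mid-80s of bits and
2048 near 117; NIST's tabulated values (80 and 112) are a few bits
lower, which is the o(1) slack.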



> Actually, I thought there was data on this which shows that auto-update
> keeps devices more secure, with fewer problems.  I think Microsoft have
> published on this, anyone care to comment?

microsoft updates are not the standard upon which to measure all
application updates. the vast majority don't check certificates or
secure digests at all, hence the hundreds of vectors in evilgrade that
provide a seamless path from MitM at a coffee shop to administrator on
your laptop.

is "not using crypto" or "not using crypto right" the "known-crap"
piece of this equation?

is the MitM or DNS poison dependency "low risk" enough per the data
that the "known crap" of the update itself no longer matters?
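the minimum bar that evilgrade exploits being absent is just this:
checking that the bytes you fetched are the bytes the vendor
published. a hedged python sketch (`verify_update` is a hypothetical
helper; a pinned digest only helps if the expected value itself
arrives over an authenticated channel, which is why real updaters
sign releases):

```python
import hashlib
import hmac

def verify_update(payload: bytes, expected_sha256_hex: str) -> bool:
    """refuse an update whose digest does not match the pinned value.
    note the dependency: if the expected digest itself came over the
    same MitM-able channel, this check buys nothing."""
    actual = hashlib.sha256(payload).hexdigest()
    return hmac.compare_digest(actual, expected_sha256_hex)
```

hmac.compare_digest is used so the comparison itself does not leak
timing information about the expected value.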


---


thank you for taking the time to address these points in depth so that i
can better understand your reasoning.

this is an interesting discussion because i arrived at the opposite
conclusion: given the reasonableness of long keys and secure designs,
and in view of ever-improving attacks, the best course of action is to
solve _once_ for the hardest threat model, so that you don't rely on
past indicators to predict future security and all lesser threat
models can benefit from the protection provided.

i dream of a future where the sudden development of many-qubit quantum
computers does not cause a panic to replace key infrastructure or
generate new keys. where the protocols have only one mode, and it is
secure. where applications don't need to be updated frequently for
security reasons. where entire classes of vulnerabilities don't exist.

in short, i dream of a future where the cooperative solution to the
most demanding threat models is pervasive, to the benefit of all
lesser models, now and into the future.


best regards,


P.S. part of the context for this bias is my perspective as developer
of fully decentralized systems. any peer in such a system is
potentially the highest-profile target; the threat model for any peer is
the most demanding threat model any one peer may operate under. the
usual "client vs. server", or "casual vs. professional" distinctions
in threat models no longer apply...

