[cryptography] caring harder requires solving once for the most demanding threat model, to the benefit of all lesser models

ianG iang at iang.org
Wed Oct 15 07:13:41 EDT 2014

On 13/10/2014 16:45, coderman wrote:
> On 10/13/14, ianG <iang at iang.org> wrote:
>> ...
>> your welcome ;-)
> a considered and insightful response to my saber rattling diatribe.
> i owe you a beer, sir!

I'm honoured!

>> Ah well, there is another rule we should always bear in mind:
>>      Do not use known-crap crypto.
>> Dual_EC_DRBG is an example of a crap RNG.  For which we have data going
>> back to 2006 showing it is a bad design.
> let's try another example: Intel RDRAND or RDSEED.  depend on it as
> the sole source of entropy?

According to what I consider good security practice [0], relying on one
(platform) source for random numbers is probably best, unless you really
truly have a good reason not to.

We have no data suggesting the design is bad.  Actually, all the data
suggests the design is good!  What we have is an unfortunate learning
exercise in being too good:  whitening in hardware also hides backdoors.
Is that a good enough reason?  Not my call, at the moment.  But when
data turns up, such as the attacks on Dual_EC_DRBG, we'll have something
to chew on.
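This is the argument for mixing rather than trusting: hash a second
hardware source into the platform source, so a backdoored source can
neither reveal nor reduce the honest source's entropy.  A sketch
(hypothetical; `os.urandom` stands in here for the extra hardware
source, which in real life might be RDRAND):

```python
import hashlib
import os

def mixed_random(n: int, extra_source=os.urandom) -> bytes:
    """Hash together the platform RNG and a second source (e.g. a
    hardware RNG).  If either source is honest and unpredictable,
    the output is too: the hash acts as an extractor, so a
    backdoored source cannot cancel the honest one's entropy."""
    out = b""
    block = 0
    while len(out) < n:
        material = os.urandom(32) + extra_source(32) + block.to_bytes(8, "big")
        out += hashlib.sha256(material).digest()
        block += 1
    return out[:n]
```

This is roughly what the kernels ended up doing with RDRAND: feed it
into the pool as one contributor among several, never as the sole source.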

> in theory, the only attacks that would allow to manipulate the output
> are outside scope. (e.g. the data shows them as nation state level
> hypothetical)
> is "depending on a single entropy source" the "known-crap" part?

I say no.

> or is
> it the un-verifiable output of this specific source that is
> "known-crap"?

Ah, yes, this is a question.  Again, I'd say it isn't "known-crap",
because there is substantial pressure on the platform provider never to
get caught providing known-crap.

Which makes it a very high value target ;) so there are limits to
assumptions here.  One wonders if they will fix that in future releases...

> (or am i overreaching, and you advocate direct and sole use of RDRAND
> everywhere? :)

:) em, close, I advocate direct and sole use of your platform's RNG.
Rule #1:


1. Use what your platform provides. Random numbers are hard, which is
the first thing you have to remember, and always come back to. Random
numbers are so hard, that you have to care a lot before you get
involved. A hell of a lot. Which leads us to the following rules of
thumb for RNG production.

    a. Use what your platform provides.
    b. Unless you really really care a lot, in which case, you have to
write your own RNG.
    c. There isn't a lot of middle ground.
    d. So much so that for almost all purposes, and almost all users,
Rule #1 is this: Use what your platform provides. E.g., for *nix, use
urandom [Ptacek].
    e. When deciding to breach Rule #1, you need a compelling argument
that your RNG delivers better results than the platform's [Gutmann1].
Without that compelling argument, your results are likely to be more
random than the platform's system in every sense except the quality of
the numbers.
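In code, Rule #1 is a line or two.  For example, in Python everything
below ultimately reads the same kernel source that urandom exposes:

```python
# Rule #1 in practice: take random numbers from the platform, full stop.
import os
import secrets

key = os.urandom(32)              # 256-bit key straight from the kernel
token = secrets.token_hex(16)     # the secrets module wraps the same source
nonce = secrets.token_bytes(12)   # e.g. an AEAD nonce
```

No seeding, no pool management, no home-grown mixing; that is the whole
point of the rule.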

>> Others in this category include:  RC4, DES, MD5, various wifi junk
>> protocols, etc.
> if RC4 is known-crap, then how is a downgrade to known-crap not a problem?

It is.  For my money, a downgrade is always known-crap.  A downgrade to
known-crap is beyond embarrassing, it's humiliating.  It is a mark of
architectural failure, it means that you knew there was known-crap yet
you did nothing.

Oh, and today's news.  SSL should have been deprecated ages ago.  Why
wasn't it?  Just embarrassing;  those who wax lyrical about the need
to support 350 algorithm suites and versions back to the beginning of
time have no thought, zip, nada, let alone a solution, on deprecation.

>>> Q: 'Should I switch away from 1024 bit strength RSA keys?'
>> I agree with that, and I'm on record for it in the print media.  I am
>> not part of the NIST lemmings craze.
>> So, assuming you think I'm crazy, let's postulate that the NSA has a box
>> that can crunch a 1024 key in a day.  What's the risk?
>> ...
>> WYTM?  The world that is concerned about the NSA is terrified of open
>> surveillance.  RSA1024 kills open surveillance dead.
> consider a service provider that i use, like Google, with a
> hypothetical 1024 bit RSA key to secure TLS. they don't use forward
> secrecy, so recovery of their private key can recover content.

If google were to ask me 'Is 1024 bit broken?' I would say no.  'Should
I switch away from 1024 bit strength RSA keys?'  Sure, do that, in time,
but don't be overly panicked about it.

(This is a trick question of course, you've shifted the goalposts.  So I
shifted the strike... google is reputed to have security people and
never ever asks anyone else what to do.  On the other hand, your average
online bank is something that practices 'best practices' and needs to be
told what to do.)
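For a feel of where 1024 bits sits, the standard GNFS running-time
heuristic can be turned into a back-of-envelope strength estimate.  A
sketch only: the o(1) terms are hand-waved, and NIST's published
equivalences (80 bits for RSA-1024, 112 for RSA-2048) are the numbers
to actually quote:

```python
import math

def gnfs_strength_bits(modulus_bits: int) -> float:
    """Rough symmetric-equivalent strength of an RSA modulus via the
    general number field sieve heuristic L_N[1/3, (64/9)^(1/3)].
    Constants are approximate; use NIST's tables for real decisions."""
    ln_n = modulus_bits * math.log(2)
    work = math.exp((64 / 9) ** (1 / 3)
                    * ln_n ** (1 / 3)
                    * math.log(ln_n) ** (2 / 3))
    return math.log2(work)
```

Running it, RSA-1024 lands in the high 80s of bits and RSA-2048 around
117, which is why 1024 is "weakening" rather than "broken": nobody has
shown a factoring engine anywhere near that work factor in public.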

> what is the risk that a Google-like provider key could be attacked? i
> have no idea.  but certainly more than my risk as a single individual.

Indeed.  In fact, we know they were already attacked, and breached.  If
they'd asked me on that one I'd have said, yeah, probably best to have
1024 bit RSA rather than nothing ;)

> regarding open surveillance, this is a potential mechanism for it
> despite the appearance of privacy.
> at what point does an insufficient key length become "known-crap" vs.
> needless lemming craziness?

This is a very good question but I believe the wrong one.

The problem isn't the keylength, it is the lack of upgradeability in
modern systems.  As we saw today, SSL 3.0 is now this month's pariah.
Yet SSL 3.0 was late 1990s, there are now kids in the industry who
weren't born when SSL 3.0 was designed.

So, why is it still there?  If we had an answer to that, we'd have the
answer to the "known-crap" keylength problem as well.  Focus on the real
problem:  how do we upgrade *everything* to something written in the
last 7 years?
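One hypothetical answer is to make deprecation mechanical rather than
political: ship every protocol version with an expiry date, and have
negotiation refuse expired versions outright.  A sketch (names and
dates illustrative, not any real policy):

```python
from datetime import date

# Hypothetical: each supported version carries a hard expiry date.
VERSIONS = {
    "sslv3":   date(1999, 1, 1),   # long expired
    "tlsv1.2": date(2030, 1, 1),   # illustrative date only
}

def negotiate(offered, today=None):
    """Pick a mutually offered version, refusing anything expired.
    Choosing the latest-expiring version stands in for 'newest'."""
    today = today or date.today()
    live = [v for v in offered if VERSIONS.get(v, date.min) > today]
    if not live:
        raise ValueError("no non-expired protocol version in common")
    return max(live, key=lambda v: VERSIONS[v])
```

With something like this in place, SSL 3.0 would have turned itself off
a decade before anyone had to argue about it.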

> said another way, "the data" is only useful if you or those you trust
> is not an outlier.

Yes, I'd agree.  Data is useful in the absence of other evidence, but
we agree it is statistical, and only an approximation that suits the
average case.
If you're an outlier, you've got outlying problems.  But if you're an
outlier, your problems should not be a burden on others.  This is why
NIST is wrong to force everyone to upgrade to 2048, and it is unbalanced
to mix general advice with what google do.

> in addition, "the data" is only retrospective; by
> definition class breaks and novel attacks are not worth considering
> until they become known and used.

Yup, this is a cost; some have to take one for the team, balanced
against the fact that a priori FUD is also difficult to analyse and
incurs costs without benefit.

So, for example, Heartbleed.  Everyone upgraded, $500m ka-ching, thank
you very much.  (OK, I have my doubts about that number too, but it's a
number to work with.)
Now, we have very little evidence of any breaches using Heartbleed.
What is it, one?  The CRA?  And that one was caught.  Was there another?
Hypothetically what would have happened if everyone waited a month and
saved say half the cost by doing it leisurely?  $250m saved!

Would losses -- theft -- have risen from $0 known to ... $1m ?

We're still in profit by $249m.

Let's wait another month?  Could we wait a year?
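The back-of-envelope above, spelled out (all figures are this post's
rough guesses, not measured data):

```python
# Heartbleed response as cost/benefit, using the post's own numbers.
upgrade_now_cost = 500e6   # everyone patches in a panic
leisurely_cost   = 250e6   # hypothetical: patch over a month instead
extra_losses     = 1e6     # hypothetical theft during the wait

net_saving = (upgrade_now_cost - leisurely_cost) - extra_losses
print(f"${net_saving / 1e6:.0f}m")   # $249m
```

The point is not the particular figures but that nobody in the panic
even framed the question this way.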

> does the difficulty in migrating
> away from a new-known-crap mistake factor into how you draw the line?

Very much so.  Known-crap is however a warning that we should migrate.
As and when permitted.  Maybe this month, maybe this year.

*But we should definitely upgrade.*

What happens with NIST is that they set a very aggressive timetable that
forced lots of rework on CAs.  Now, granted those CAs are lazy bastards
who don't deserve the oxygen, but why couldn't they upgrade in due
course?  Why were they forced away from RSA1024, which has never shown
a problem, while nobody did anything about MD5 until it was cracked?

Answer:  everyone was asleep at the wheel.  NIST received some secret
push from NSA, then panicked.  Now everyone's caffeinated and drugged
to the eyeballs at the wheel.  One blink and everyone's in a pile-up.

Remember that CA that was crunched with MD5?  RapidSSL, that was it;
well, they were only a month away from hopping onto a new root.  They
had a plan.  Just got caught on the hop, almost.

>> Actually, I thought there was data on this which shows that auto-update
>> keeps devices more secure, suffer less problems.  I think Microsoft have
>> published on this, anyone care to comment?
> microsoft updates are not the standard upon which to measure all
> application updates. the vast majority don't check certificates or
> secure digests at all, hence the hundreds of vectors in evilgrade that
> provide a seamless path from MitM at coffee shop to administrator on
> your laptop.

I *wish* there was data on that claim!!

> is the "not using crypto" or "not using crypto right" parts the
> "known-crap" piece of this equation?

Yeah.  This is way more known-crap than say 1024bit RSA.

> is the MitM or DNS poison dependency "low risk" enough per the data
> that the "known crap" of the update itself no longer matters?

I believe so.  One day that might change.  But for now... download stuff
off the net using HTTP, it just works.  Probably because of the element
of surprise, the difficulty of interception for ordinary attackers, and
the fact that the stuff we download is probably 1000 times more
dangerous than any MITM or similar.
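The fix for the evilgrade class of attack is cheap: pin a digest (or
better, a signature key) in the installed application and refuse any
update that doesn't verify.  A minimal sketch, assuming the pinned
value itself arrived over an authentic channel such as the original
install:

```python
import hashlib
import hmac

def verify_update(payload: bytes, expected_sha256_hex: str) -> bool:
    """Check a downloaded update against a digest pinned in the
    installed application.  An HTTP MitM can tamper with the payload
    but cannot forge a matching SHA-256 digest."""
    digest = hashlib.sha256(payload).hexdigest()
    # Constant-time comparison, out of habit more than necessity here.
    return hmac.compare_digest(digest, expected_sha256_hex)
```

With this check in place, the transport can stay plain HTTP and the
coffee-shop MitM gets nothing but a failed install.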

> thank you taking the time to address these points in depth so that i
> can better understand your reasoning.
> this is an interesting discussion because i arrived at the opposite
> conclusion: given the reasonableness of long keys and secure designs,
> and in view of ever improving attacks, the best course of action is to
> solve _once_ for the hardest threat model, so that you don't rely on
> past indicators to predict future security and all lesser threat
> models can benefit from the protection provided.

I agree.  If that is the topic, let's solve it once.  If you're
interested in other ruminations, try this one [1].

But for legacy stuff not necessarily so.

> i dream of a future where the sudden development of very many qubit
> computers does not cause a panic to replace key infrastructure or
> generate new keys.

Ahhh... that is a tough one.

> where the protocols have only one mode, and it is
> secure.

That I agree with.

> where applications don't need to be updated frequently for
> security reasons. where entire classes of vulnerabilities don't exist.

That I wish I agreed with.  It just doesn't seem to be the case.  Look
at SSL: widely believed to be strong in the 1990s, it is now the
embarrassment of the 2010s.  We know so much more as time goes on.

(Note the contradiction...)

> in short, i dream of a future where the cooperative solution to the
> most demanding threat models is pervasive, to the benefit of all
> lesser models, now and into the future.
> best regards,
> P.S. part of the context for this bias is my perspective as developer
> of fully decentralized systems. any peer in such a system is
> potentially the highest profile target; the threat model for any peer
> the most demanding threat model any one peer may operate under. the
> usual "client vs. server", or "casual vs. professional" distinctions
> in threat models no longer apply...

Right.  And, the upgrade now model also disappears because you can't
change the network rules without breaking the consensus model.  Oops.

So, in such a domain, what matters?  To take our favourite whipping
horse Bitcoin, it has a great solution for double spending, a dynamic
membership signature over the block, but its approach at an
institutional level to platform security can only be deemed a sick joke.

So, should we work on building bigger and better blockchains?  (I would
like to, as I personally find the PoW solution abominable except in winter.)

Or, should we work on platform security first and try and keep more of
the coins in the pockets of more of the people?

The neat CS answers-on-paper don't necessarily survive when the people
get involved...


