[cryptography] philosophical question about strengths and attacks at impossible levels
marsh at extendedsubset.com
Wed Nov 24 17:16:44 EST 2010
On 11/24/2010 02:11 PM, coderman wrote:
> On Wed, Nov 24, 2010 at 2:49 AM, Marsh Ray<marsh at extendedsubset.com> wrote:
> (that's the abridged version. this is actually more complicated than
> many assume, and i've written my own egd's in the past to meet need.)
>> How does this feature interact with virtualization?
> for virtual guests you have a different type of egd that communicates
> with host egd / host entropy pool to feed guest pool in similar
> manner. you typically can't use these entropy sources in both host
> and guest concurrently.
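(For the curious, a rough sketch of what such a guest-side feeder can look like on a Linux guest, using the RNDADDENTROPY ioctl on /dev/random. The one-bit-credited-per-input-bit policy here is just a placeholder, not a recommendation, and the source of the host-supplied bytes is left abstract:)

```python
import fcntl
import os
import struct

# Linux ioctl number for RNDADDENTROPY: _IOW('R', 0x03, int[2]).
RNDADDENTROPY = 0x40085203

def pack_rand_pool_info(data: bytes) -> bytes:
    """Build a struct rand_pool_info payload: entropy_count (in bits),
    buf_size (in bytes), then the raw bytes to mix in."""
    entropy_bits = len(data) * 8  # placeholder credit: one bit per input bit
    return struct.pack("ii", entropy_bits, len(data)) + data

def feed_guest_pool(data: bytes, device: str = "/dev/random") -> None:
    """Credit host-supplied bytes to the guest kernel's entropy pool.

    Requires root; `data` would come from the host egd over whatever
    channel the hypervisor provides.
    """
    fd = os.open(device, os.O_WRONLY)
    try:
        fcntl.ioctl(fd, RNDADDENTROPY, pack_rand_pool_info(data))
    finally:
        os.close(fd)
```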
So are you saying it is or it isn't Cloud-Compliant?
Quick! Get a patent on gathering entropy "in a cloud computing
environment"!
>> How hard is it to define such a thing in standard chip design tools? I
>> imagine many tools will complain loudly about nondeterministic states.
> it can be as simple as a pair of fast, free-wheeling oscillators
> sampled by a slower one.
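(A toy model, purely illustrative, of that scheme: two free-running square-wave oscillators with Gaussian period jitter, sampled by a slower clock, output bit = XOR of the two sampled states. All frequencies and the jitter figure are made-up parameters:)

```python
import random

def ring_osc_bits(n, f1=101e6, f2=113e6, f_sample=1e6, jitter=0.01, seed=None):
    """Simulate two free-wheeling oscillators sampled by a slower clock.

    Each oscillator's phase advances by (f/f_sample) cycles per sample
    tick, perturbed by Gaussian period jitter; the output bit is the XOR
    of the two square-wave states at each sample instant.
    """
    rng = random.Random(seed)
    phase1 = phase2 = 0.0
    out = []
    for _ in range(n):
        phase1 += (f1 / f_sample) * (1 + rng.gauss(0, jitter))
        phase2 += (f2 / f_sample) * (1 + rng.gauss(0, jitter))
        b1 = int((phase1 % 1.0) < 0.5)  # square-wave state of osc 1
        b2 = int((phase2 % 1.0) < 0.5)  # square-wave state of osc 2
        out.append(b1 ^ b2)
    return out

bits = ring_osc_bits(10000, seed=1)
print(sum(bits) / len(bits))  # should hover somewhere near 0.5
```

The interesting part is that with ~100 oscillator cycles per sample and 1% period jitter, the accumulated phase noise per sample exceeds a full cycle, which is exactly the regime where the sampled bit stops being predictable. Shrink the jitter or the frequency ratio and the output turns into a pattern, which is what the questions below are probing.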
What frequency are these oscillators? Does it change with voltage?
Temperature? External RF sources? Other (possibly malicious) activity on
the chip? How much does it vary with manufacturing process or across
individual samples? Too much? Too little?
Can they be measured externally? Why not?
What's the GCD of their frequencies?
Can they interact (e.g., over the power bus)? What prevents them from
drifting a bit and synchronizing to a nearby fixed ratio?
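(To make the GCD point concrete, here is an idealized, jitter-free model. When the oscillator and sampler frequencies share a large common factor, the sampled sequence repeats after only f_sample / gcd(f_osc, f_sample) ticks, so all the apparent "randomness" collapses into a short fixed pattern:)

```python
from math import gcd

def sampled_pattern(f_osc, f_sample, n):
    """Sample a jitter-free square wave of frequency f_osc at f_sample Hz.

    Phase at tick k is k*f_osc/f_sample cycles; the bit is 1 in the
    first half of each cycle.
    """
    return [int(((k * f_osc) % f_sample) < f_sample / 2) for k in range(n)]

def pattern_period(bits):
    """Smallest period after which the sequence repeats."""
    for p in range(1, len(bits)):
        if all(bits[i] == bits[i % p] for i in range(len(bits))):
            return p
    return len(bits)

# Large common factor: the sampled sequence degenerates immediately.
coarse = sampled_pattern(300, 100, 200)  # gcd(300, 100) = 100 -> period 1
fine = sampled_pattern(301, 100, 200)    # gcd(301, 100) = 1   -> period 100
print(pattern_period(coarse), 100 // gcd(300, 100))
print(pattern_period(fine), 100 // gcd(301, 100))
```

Real oscillators drift and lock through analog coupling rather than sitting at exact integer ratios, but the toy model shows why a near-rational frequency relationship is the failure mode to worry about.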
>> What sources of entropy are available to the chip
>> designer that are not also available to a software EGD?
> physical processes :)
Many chips have some A/D inputs, some have thermometers, etc. Almost all
have some external hardware interrupts and reasonably fast clocked
internal counters. Given all that, it's hard to explain how cosmic radio
noise is more of a "physical process" than the timing of network packets.
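(A sketch of that software approach, just to show the shape of it: mix high-resolution event timestamps through a hash, the way a software egd might fold in interrupt or packet-arrival timing. This toy version skips entropy estimation and proper DRBG output handling, both of which a real design needs:)

```python
import hashlib
import time

class TimingPool:
    """Toy software entropy pool fed by event timing jitter."""

    def __init__(self):
        self._state = hashlib.sha256(b"pool-init")

    def add_event(self):
        # Only the low-order, jittery bits of the counter carry entropy;
        # hashing the whole timestamp is the lazy but safe way to keep them.
        t = time.perf_counter_ns()
        self._state.update(t.to_bytes(8, "little"))

    def read(self, nbytes=32):
        # Hash the pool state for output, then fold the output back in
        # so successive reads differ. A real design would use a proper
        # DRBG here and track how much entropy has actually been credited.
        out = self._state.digest()
        self._state.update(out)
        return out[:nbytes]

pool = TimingPool()
for _ in range(100):
    pool.add_event()  # stand-ins for interrupts / packet arrivals
print(pool.read().hex())
```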
>> How many customers would choose your chip instead of the other brand because
>> of this? Is it worth the risk inherent in any new feature?
> you shouldn't have to choose. if every core had well designed hw
> entropy sources it would be a given. the lack of this is the source
> of my lamentation...
I think hardware engineers these days are used to modular CPUs (e.g.
ARM) where they can throw out whatever they don't need. If the
application designer doesn't see a need for it, it'll get thrown out. If
app designers did commonly want hw entropy, it would be available in
more chips today.
>> How do you market it? How do you keep it from being marketed as something
>> that it isn't?
> no idea. i have yet to find effective responses to combat the
> exagnorance from marketing incurred when technical meets prostitution.
> if you solve this let me know *grin*
Make the feature sound really unappealing?
>> If it turned out to be weak, would you have to recall the chips? How about
>> products containing it?
>> This sucker got baked into a lot of smart meters, or so I hear:
> yup. that's one example of how not to do entropy!
> (sadly, there are many more examples out there... :(
>> Of course, the answer may still be that it's better to have an instruction
>> for it than not. But the advantages are subtle and hard to quantify, whereas
>> the costs, complexity, and risks of adding it are measurable.
> agreed. i still think this is better to have than not, particularly
> for headless server configuration and plentiful key generation
> requirements. however, all of your concerns are valid and it is indeed
> a tricky endeavor to do correctly.
You have clearly thought about this a lot and have good answers. That
was really the point of my questions: even if it's easy to get random
behavior from silicon, it's still a nontrivial engineering project that
must compete with other projects for scarce development resources.
In the end it's hard to convince the unconverted that you have something
meaningfully better than what you could get from a pure software
approach (interrupt timing, etc).
It seems like the main selling point for an entropy pool in dedicated
silicon is that you might be able to retain some entropy across reboots
(in flash or capacitor-backed ram) without exposing it to external
observation. The feature now becomes "shorter wakeup time" (which
everybody can relate to) rather than "more unpredictable numbers" (when
most people are satisfied with the current methods).
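(In software terms, the across-reboots idea looks roughly like the seed-file dance below; the hardware version would do the same thing transparently in flash, without ever exposing the saved state to the OS. File name and function names here are invented for illustration:)

```python
import hashlib
import os

SEED_FILE = "seed.bin"  # stand-in for on-chip flash / capacitor-backed RAM

def save_seed(pool_state: bytes) -> None:
    """At shutdown: persist a one-way derivative of the pool, never
    anything that was also handed out as random output."""
    with open(SEED_FILE, "wb") as f:
        f.write(hashlib.sha256(b"save" + pool_state).digest())

def boot_pool() -> bytes:
    """At boot: fold the saved seed (if any) together with fresh input,
    so disclosure of the seed file alone never determines the new state."""
    fresh = os.urandom(32)
    try:
        with open(SEED_FILE, "rb") as f:
            saved = f.read()
    except FileNotFoundError:
        saved = b""  # first boot: no carried-over entropy yet
    state = hashlib.sha256(b"boot" + saved + fresh).digest()
    save_seed(state)  # re-save immediately so the old seed is never reused
    return state
```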
Crypto enthusiasts seem to have a particular fascination with entropy
gathering and PRNGs for some reason. Perhaps that's because it appears to
be a relatively easy thing to experiment with, and quite practical
to make something more or less impossible to break. Since most of the
time we spend our efforts trying to eliminate the effects of entropy in
our systems, it's fun to think about the opposite for a change.