[cryptography] Quality of HAVEGE algorithm for entropy?

Stephan Mueller smueller at chronox.de
Fri Nov 29 05:48:29 EST 2013


On Friday, 29 November 2013 at 11:22:29, Joachim Strömbergson wrote:

Hi Joachim,

> Aloha!
> 
> Stephan Mueller wrote:
> > I am doing a lot of research in this area these days. If you imply
> > that main storage means RAM outside the caches, I think your
> > statement is not entirely correct.
> 
> Yes, main store is RAM.
> 
> > From the first rounds of testing, I think I see the following:
> [...]
> 
> What CPU is this? From your description it sounds to me like an x86,

Indeed: Intel Core i7 2nd gen.

> modern high-performance CPU with instructions that have different cycle
> times and multiple levels of caches. On these types of CPUs, yes, you
> don't need to hit main memory to get execution time variance.

Good, then we are on the same page.
> 
> What I was trying to say is that for Havege running on MCUs (AVR, AVR32,
> PIC, PIC32, ARM Cortex M0, etc.), where instructions in general take the
> same number of cycles to execute, and where caches are few (few levels)
> and have a simple or even no replacement policy (it is done by SW control),
> the assumptions Havege relies on are not really present. And this change in
> physical setup _should_ affect the variance measured. But again, I
> haven't tested it yet.

My RNG should run there as well, and I see variations without all the 
magic in HAVEGEd.
> 
> > - disabling the caches completely introduces massive variations
> 
> That is interesting. For a sequence of the same type of instructions?

Yes, even if you just call RDTSC twice immediately after each other and 
print out the delta, that delta fluctuates massively. My bare-metal tester 
produces a histogram with 25 slots. Typically all 25 slots are filled 
when creating such a delta for about 10,000 rounds.
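
To illustrate the idea (this is only a minimal sketch, not my actual 
bare-metal tester; the bucketing -- offset by the smallest observed delta, 
one cycle per slot, overflow in the last slot -- is just an assumption, 
and it assumes x86 with the GCC/Clang __rdtsc() intrinsic):

/* Sketch: histogram of back-to-back RDTSC deltas over 10,000 rounds. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() */

#define ROUNDS 10000
#define SLOTS  25

int main(void)
{
    static uint64_t delta[ROUNDS];
    unsigned long hist[SLOTS] = { 0 };
    uint64_t min = UINT64_MAX;

    /* record the delta of two immediately consecutive TSC reads */
    for (int i = 0; i < ROUNDS; i++) {
        uint64_t t1 = __rdtsc();
        uint64_t t2 = __rdtsc();
        delta[i] = t2 - t1;
        if (delta[i] < min)
            min = delta[i];
    }

    /* bucket relative to the smallest delta; overflow goes into slot 24 */
    for (int i = 0; i < ROUNDS; i++) {
        uint64_t slot = delta[i] - min;
        if (slot >= SLOTS)
            slot = SLOTS - 1;
        hist[slot]++;
    }

    for (int s = 0; s < SLOTS; s++)
        printf("slot %2d: %lu\n", s, hist[s]);

    return 0;
}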
> 
> > ==> My current hunch is that the differences in the clock speeds that
> > drive the CPU versus the clock speed driving the memory locations
> > that you access (either for instruction or data fetches) are the key
> > driver of variations.
> 
> It's more of a clock desynchronization effect?

Not sure how exactly you define that term, but I think the core issue is 
the non-synchronized clocks driving the CPU and the RAM.
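
To show what I mean (again only a sketch, not measurement code I actually 
use; it assumes x86 with the GCC/Clang __rdtsc(), _mm_clflush() and 
_mm_mfence() intrinsics): time the same load once with its cache line 
flushed, so the access has to go out to RAM, and once with the line warm 
in the cache. The flushed accesses show a much wider spread of deltas.

/* Sketch: timing a cache-cold vs. cache-warm load with RDTSC. */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc(), _mm_clflush(), _mm_mfence() */

static volatile uint64_t probe;

static uint64_t time_load(void)
{
    uint64_t t1, t2;

    _mm_mfence();
    t1 = __rdtsc();
    (void)probe;              /* the timed (volatile) load */
    _mm_mfence();
    t2 = __rdtsc();
    return t2 - t1;
}

int main(void)
{
    for (int i = 0; i < 10; i++) {
        _mm_clflush((const void *)&probe);   /* force a miss to RAM */
        _mm_mfence();
        uint64_t cold = time_load();
        uint64_t warm = time_load();         /* line is now cached */
        printf("cold: %4lu cycles, warm: %4lu cycles\n",
               (unsigned long)cold, (unsigned long)warm);
    }
    return 0;
}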
> 
> > I do not concur here, because even IF the VM host does some RDTSC
> > emulation, the emulation code is subject to the fundamental jitter
> > problem outlined above. Hence, I do not think that the jitter can be
> > eliminated by virtualization.
> 
> I would say that Intel and Wind River would have a different opinion. It is

I would be very interested in such an approach -- how can you estimate a 
priori which variations your code (the code that is supposed to remove the 
variations) will have? Note that any code itself produces variations. Thus, 
when you remove the variations of the base system, you introduce new 
variations with that code.

> in fact one of the things you can control. You can lock the RDTSC to
> provide all kinds of sequences in relation to the CPU or clock or
> otherwise. This is actually what is being used to protect against timing
> side-channel attacks between VMs and processes.
> 
> > [1] http://www.chronox.de
> 
> Very cool. How does [1] compare functionally to jytter?
> http://jytter.blogspot.se/

I have to check.
> 
> Side note, on [1] you state that it is "a non-physical true random
> number generator". I would say that it is a physical RNG. It measures
> physical events. But it does not measure events _outside_ the CPU.

That term was coined by the German BSI, which I sometimes have to work with. 
Maybe it is not entirely accurate. But the BSI considers a physical RNG to 
be a purely physical implementation; if software is attached, it is called 
non-physical.

Ciao
Stephan
-- 
| Cui bono? |

