[Botan-devel] Key handling in Botan

Jack Lloyd lloyd at randombit.net
Tue Nov 17 17:12:51 EST 2009


On Tue, Nov 17, 2009 at 10:33:14PM +0100, Rickard Bellgrim wrote:

> I have a question regarding the key handling in Botan. I see that
> there are checks performed when creating the key object from
> existing key material. Does this give any processing overhead?

Definitely. With strong key checking (verifying the primes, keypair
consistency, etc), loading a 2048-bit RSA key from a file takes about
30 milliseconds on my desktop. With most checking disabled (by
toggling BOTAN_PRIVATE_KEY_STRONG_CHECKS_ON_LOAD in build.h) it drops
to 5 ms. That is still substantial, since (after precomputations) an
RSA signature with a key of that size also takes about 5 ms on my
machine.

> The next version of our software will have the key material
> encrypted. If the key is going to be used, then it is decrypted on
> the fly and used to create the key object. Once we have performed
> the signing operation, we drop the Botan private key object. Thus
> erasing the plaintext key material from the memory.
>
> In the current solution we have a key-cache storing these Botan key
> objects between the signing operations, which we would like to
> remove for version 2. Would it be possible to create the key objects
> with as little overhead as possible?

To be honest, key loading has never been a performance problem in the
past, so I'll have to ponder this problem for a while.

It's worth mentioning that repeatedly reloading the key will also
cause overhead, because the code is mostly written on the assumption
that an object will be used many times. For instance, computing the
information for RSA-CRT, the Montgomery reduction parameters, and the
fixed-window precomputations may well be a net loss when they can't
be amortized over multiple operations. It would be theoretically
interesting to make the objects smart enough to handle this
themselves - starting out with slow/simple operations and adding
precomputations as they are used more often - but I'm not sure that's
really a viable system, purely from a complexity perspective.

The approach that strikes me as most plausible at the moment is adding
a new form of serialization, parallel to the standard PKCS8 form, that
includes all the precomputed values (for Montgomery, etc) which could
then be encrypted and stored (and later decrypted and loaded directly
without any internal checking). I suspect there are at least 3-4 major
problems with this idea that I won't realize until I toy around with
an implementation, though.
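To make the serialization idea concrete, here is a toy fixed-width
record (emphatically not Botan's actual format) that carries the CRT
precomputations next to the standard key fields, so a reload can skip
recomputing them; the whole record would be what gets encrypted:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical on-disk record: standard RSA fields plus the
// precomputed CRT values (d mod p-1, d mod q-1, q^-1 mod p).
struct RsaKeyRecord {
    uint64_t n, e, d;          // standard fields
    uint64_t p, q;             // primes
    uint64_t d_p, d_q, q_inv;  // precomputed CRT values

    std::vector<uint8_t> serialize() const {
        std::vector<uint8_t> out(sizeof(*this));
        std::memcpy(out.data(), this, sizeof(*this));
        return out;
    }

    static RsaKeyRecord deserialize(const std::vector<uint8_t>& in) {
        assert(in.size() == sizeof(RsaKeyRecord));
        RsaKeyRecord r;
        std::memcpy(&r, in.data(), sizeof(r));
        return r;   // no internal checking: the values are trusted
    }
};
```

One of the problems alluded to above is visible already: skipping the
checks means the stored values must be authenticated (e.g. by the
encryption layer), since a tampered q_inv would silently break
signatures.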

On the application side - how much repeated use of a key do you
expect to see? I'm wondering if it might make sense to keep your
current cache, but record the last-use time on each use, and then
periodically (say, every 30 seconds) do a pass that zeros out any
keys that have not been used recently. That does leave a longer
attack window than zeroing immediately after use, but it would let
you configure the trade-off between performance and the window of
risk.
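That cache policy might look something like this sketch (illustrative
names, key material modeled as a byte vector, not a Botan API):

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical cache: stamp each use, and let a periodic sweep zero
// and drop any entry idle longer than a configurable window.
class KeyCache {
public:
    using Clock = std::chrono::steady_clock;

    explicit KeyCache(std::chrono::seconds idle_limit)
        : idle_limit_(idle_limit) {}

    void put(const std::string& id, std::vector<uint8_t> key) {
        entries_[id] = Entry{std::move(key), Clock::now()};
    }

    const std::vector<uint8_t>* get(const std::string& id) {
        auto it = entries_.find(id);
        if(it == entries_.end()) return nullptr;
        it->second.last_use = Clock::now();   // record each use
        return &it->second.key;
    }

    // Call from a periodic timer (say, every 30 seconds).
    void sweep() {
        const auto now = Clock::now();
        for(auto it = entries_.begin(); it != entries_.end(); ) {
            if(now - it->second.last_use > idle_limit_) {
                std::fill(it->second.key.begin(),
                          it->second.key.end(), 0);  // zero first...
                it = entries_.erase(it);             // ...then drop
            } else ++it;
        }
    }

    size_t size() const { return entries_.size(); }

private:
    struct Entry { std::vector<uint8_t> key; Clock::time_point last_use; };
    std::chrono::seconds idle_limit_;
    std::unordered_map<std::string, Entry> entries_;
};
```

The idle_limit parameter is the configurable knob: shorter means a
smaller exposure window, longer means fewer expensive key reloads.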

> And another question. Are you using the RSA Chinese Remainder
> Theorem to gain any speed?

Yes, RSA-CRT is used. (In fact, just today I added, in a branch,
support for using C++0x's threading facilities to compute the two
halves of the CRT in parallel, which improves performance by 50 to
90% depending on the key size - on a multicore processor, of course;
there's not much gain otherwise - which is rather nice.)
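For anyone unfamiliar with the trick, here is a toy illustration
(64-bit integers and the classic textbook key p=61, q=53, e=17,
d=2753; real keys need a bignum library): two half-size
exponentiations mod p and mod q, recombined with Garner's formula.

```cpp
#include <cstdint>

// Plain square-and-multiply modular exponentiation.
static uint64_t modexp(uint64_t b, uint64_t e, uint64_t m) {
    uint64_t r = 1 % m;
    b %= m;
    for(; e; e >>= 1) {
        if(e & 1) r = (r * b) % m;
        b = (b * b) % m;
    }
    return r;
}

// RSA-CRT signing: s_p = m^(d mod p-1) mod p, s_q likewise mod q,
// then Garner: s = s_q + q * (q_inv * (s_p - s_q) mod p).
// The two exponentiations are independent - that independence is
// what the threaded branch described above exploits; they run
// sequentially here.
static uint64_t rsa_crt_sign(uint64_t msg, uint64_t p, uint64_t q,
                             uint64_t d, uint64_t q_inv) {
    uint64_t sp = modexp(msg, d % (p - 1), p);
    uint64_t sq = modexp(msg, d % (q - 1), q);
    uint64_t h = ((sp + p - sq % p) % p) * q_inv % p;
    return sq + q * h;
}
```

Each half works with numbers roughly half the size of the modulus,
which is where the speedup comes from even without threads; running
the halves concurrently then stacks on top of that.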

-Jack


