[Botan-devel] Hashing large files

Jim Dixon jdd at dixons.org
Sun Mar 5 07:41:53 EST 2006


On Sun, 5 Mar 2006, Benjamin Lau wrote:

> I need to be able to hash large files without loading the entire file
> into memory, as for certain inputs, the file is too big to fit into
> memory and I get a std::bad_alloc. Is there any other way to hash
> data from files without loading all the data into memory?

You need to be able to read the file to hash it, but you certainly
don't need to load the entire file into memory at one time.  Most
SHA-1 implementations let you hash the file a chunk at a time
(chunks that are multiples of 64 bytes, the block size, are convenient)
and then finish off the hash with a separate call.

With Botan's SHA-160 you call
  add_data(const byte input[], u32bit length)
on each chunk and then finish the hash with
  final_result(byte output[])
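
A minimal sketch of that chunked pattern. The `RunningHash` class below is a
hypothetical stand-in (a simple FNV-1a checksum) used so the example is
self-contained; with real Botan you would construct a `Botan::SHA_160` object
instead and feed it the same fixed-size chunks, finishing with the final-result
call. The 4096-byte buffer size is an assumption, chosen as a multiple of 64.

```cpp
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical stand-in for an incremental hash object such as
// Botan::SHA_160.  It keeps only a small running state, never the
// whole input, which is the point of chunked hashing.
struct RunningHash {
    std::uint64_t state = 1469598103934665603ULL;   // FNV-1a offset basis

    // Absorb one chunk of input (mirrors Botan's add_data()).
    void add_data(const unsigned char* input, std::size_t length) {
        for (std::size_t i = 0; i < length; ++i) {
            state ^= input[i];
            state *= 1099511628211ULL;              // FNV-1a prime
        }
    }

    // Finish and return the digest (mirrors Botan's final_result()).
    std::uint64_t final_result() const { return state; }
};

// Hash a file chunk by chunk: only one small buffer is ever resident,
// so arbitrarily large files never trigger std::bad_alloc.
std::uint64_t hash_file(const std::string& path) {
    RunningHash hash;
    std::vector<unsigned char> buffer(4096);        // multiple of 64 bytes
    std::FILE* in = std::fopen(path.c_str(), "rb");
    if (!in)
        return 0;                                   // simplistic error handling
    std::size_t got;
    while ((got = std::fread(buffer.data(), 1, buffer.size(), in)) > 0)
        hash.add_data(buffer.data(), got);
    std::fclose(in);
    return hash.final_result();
}
```

Because the hash object buffers state internally, splitting the input at
different chunk boundaries yields the same digest as hashing it in one call.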

--
Jim Dixon  jdd at dixons.org   tel +44 117 982 0786  mobile +44 797 373 7881
