[cryptography] Key Checksums (BATON, et al)

Steven Bellovin smb at cs.columbia.edu
Thu Mar 28 17:13:44 EDT 2013


On Mar 28, 2013, at 4:21 PM, ianG <iang at iang.org> wrote:

> On 27/03/13 22:13 PM, Ben Laurie wrote:
>> On 27 March 2013 17:20, Steven Bellovin <smb at cs.columbia.edu> wrote:
>>> On Mar 27, 2013, at 3:50 AM, Jeffrey Walton <noloader at gmail.com> wrote:
>>> 
>>>> What is the reason for checksumming symmetric keys in ciphers like BATON?
>>>> 
> >>>> Are symmetric keys distributed with the checksum acting as an
>>>> authentication tag? Are symmetric keys pre-tested for resilience
>>>> against, for example, chosen ciphertext and related key attacks?
>>>> 
>>> The parity bits in DES were explicitly intended to guard against
>>> ordinary transmission and memory errors.
> 
> 
> Correct me if I'm wrong, but the parity bits in DES guard the key, which doesn't need correcting?  And the block which does need correcting has no space for parity bits?
> 
If a block is garbled in transmission, you either accept it (look at all the
verbiage on error propagation properties of different block cipher modes)
or retransmit at a higher layer.  If a key is garbled, you lose everything.
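To make the parity scheme concrete: each of the eight bytes of a DES key uses its
low-order bit as a parity bit, chosen so that every byte has an odd number of one
bits; any single-bit corruption of the key is therefore caught before any traffic
is lost to a bad key load.  A minimal sketch in Python (the helper names are mine,
not from any standard API):

```python
def des_key_parity_ok(key: bytes) -> bool:
    """Check that each byte of a 64-bit DES key has odd parity.

    DES uses 56 key bits; the low-order bit of each of the 8 bytes is a
    parity bit chosen so every byte has an odd number of one bits.
    """
    if len(key) != 8:
        raise ValueError("DES keys are 8 bytes")
    return all(bin(b).count("1") % 2 == 1 for b in key)

def fix_des_key_parity(key: bytes) -> bytes:
    """Force odd parity by adjusting the low-order bit of each byte."""
    return bytes(b ^ 1 if bin(b).count("1") % 2 == 0 else b for b in key)

key = fix_des_key_parity(bytes(range(8)))
assert des_key_parity_ok(key)
# A single flipped bit in transmission or memory is detected:
assert not des_key_parity_ok(bytes([key[0] ^ 0x80]) + key[1:])
```

This only detects errors, of course; a device that finds bad parity refuses the
key rather than trying to correct it.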

Error detection in communications is a very old idea; I can show you telegraph
examples from the 1910s involving technical mechanisms, and the realization
that this was a potential problem goes back further than that, at least as
early as the 1870s, when telegraph companies offered a "transmit back" facility
to let the sender ensure that the message received at the far end was the one
intended to be sent.

The mental model for DES was computer->crypto box->{phone,leased} line,
or sometimes {phone,leased} line->crypto box->{phone, leased} line.  Much
of it was aimed at asynchronous (generally) teletype links (hence CFB-8),
bisync (https://en.wikipedia.org/wiki/Bisync) using CBC, or (just introduced
around the time DES was) IBM's SNA, which relied on HDLC and was well-suited
to CFB-1.  OFB was intended for fax machines.  Async and fax links didn't
need error protection as long as error propagation of received data was very limited;
bisync and HDLC include error detection and retransmission by what we'd now
think of as the end-to-end link layer.  (On the IBM gear I worked with in the
late 1960s/early 1970s, the "controller" took care of generating the bisync
check bytes.  I no longer remember whether it did the retransmissions or
not; it's been a *long* time, and I was worrying more about the higher layers.)
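The self-synchronizing behavior that made CFB a good fit for those links is easy
to demonstrate.  The sketch below substitutes a hash-based toy PRF for the DES
encryption function -- CFB uses only the forward direction of the block cipher,
so any keyed function shows the mode -- and all names are illustrative, not any
real implementation:

```python
import hashlib

def toy_e(key, block):
    # Stand-in for the DES encipherment of the shift register; CFB never
    # needs the inverse, so a keyed PRF demonstrates the mode.
    return hashlib.sha256(key + block).digest()[:8]

def cfb8(key, iv, data, decrypt=False):
    sr = bytearray(iv)                 # 8-byte shift register, seeded with the IV
    out = bytearray()
    for b in data:
        ks = toy_e(key, bytes(sr))[0]  # one keystream byte per data byte
        o = b ^ ks
        out.append(o)
        fb = b if decrypt else o       # the CIPHERTEXT byte is what's fed back
        sr = sr[1:] + bytearray([fb])
    return bytes(out)

key, iv = b"demo-key", b"\x00" * 8
pt = b"ATTACK AT DAWN -- ATTACK AT DAWN"
ct = cfb8(key, iv, pt)
assert cfb8(key, iv, ct, decrypt=True) == pt

bad = bytearray(ct)
bad[5] ^= 0xFF                         # one ciphertext byte garbled in transit
got = cfb8(key, iv, bytes(bad), decrypt=True)
assert got[:5] == pt[:5]               # clean up to the hit
assert got[5] != pt[5]                 # the garbled byte and the next 8 are lost...
assert got[14:] == pt[14:]             # ...then the register refills and decryption recovers
```

A garbled ciphertext byte costs nine bytes of plaintext and the link carries on;
a garbled key, by contrast, garbles everything from that point forward.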

In the second mental model for bisync and SNA, the sending host would have
generated a complete frame, including error detection bytes.  These bytes
would be checked after decryption; if the ciphertext was garbled, the error
check would fail and the message would be NAKed (bisync, at least, used
ACK and NAK) and hence resent.  If the keying was garbled, though, nothing
would flow.
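That second model can be sketched in a few lines.  Here a toy XOR stream stands
in for the DES link encryptor and CRC-32 stands in for the bisync check bytes
(bisync actually used CRC-16 or LRC); all names are mine:

```python
import binascii
import hashlib

def xor_stream(key, data):
    # Toy stream cipher standing in for the DES crypto box on the line.
    ks = b""
    ctr = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, ks))

def send_frame(key, payload):
    # The host builds the complete frame -- payload plus check bytes --
    # and only then does the crypto box encrypt it.
    crc = binascii.crc32(payload).to_bytes(4, "big")
    return xor_stream(key, payload + crc)

def recv_frame(key, wire):
    frame = xor_stream(key, wire)
    payload, crc = frame[:-4], frame[-4:]
    if binascii.crc32(payload).to_bytes(4, "big") != crc:
        return None     # error check fails after decryption: NAK, sender resends
    return payload      # check passes: ACK

key = b"link-key"
wire = send_frame(key, b"PAYROLL RECORDS FOLLOW")
assert recv_frame(key, wire) == b"PAYROLL RECORDS FOLLOW"

bad = bytearray(wire)
bad[3] ^= 0x01                                # ciphertext garbled: this one frame is NAKed
assert recv_frame(key, bytes(bad)) is None

assert recv_frame(b"link-kez", wire) is None  # keying garbled: EVERY frame fails; nothing flows
```

The last line is the point: the link-layer check recovers from line noise, but it
gives no usable diagnostic when the key itself is wrong.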

It is not entirely clear to me what keying model IBM, NBS, or the NSA had
in mind back then -- remember that the original Needham-Schroeder paper didn't
come out until late 1978, several years after DES.  One commonly-described model
of operation involved loading master keys into devices; one end would pick
a session key, encrypt it (possibly with 2DES or 3DES) with the master key, 
and send that along.  From what I've read, I think that the NSA did have KDCs 
before that, but I don't have my references handy.  Multipoint networks
were not common then (though they did exist in some sense); you couldn't
go out to a KDC in real-time.  (I'll skip describing the IBM multipoint
protocol for the 2740 terminal; I never used them in that mode.  Let it
suffice to say that given the hardware of the time, if you had a roomful
of 2740s using multipoint, you'd have a single encryptor and single key 
for the lot.)
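The value of a key checksum falls out of that model: append a checksum to the
session key before wrapping it under the master key, and a key garbled in
distribution is rejected when it is loaded, instead of being discovered only when
no traffic will decrypt.  A hedged sketch, with a toy XOR "cipher" in place of
the 2DES/3DES wrap and an invented checksum format:

```python
import hashlib

def toy_encrypt(master, data):
    # Toy XOR "cipher" standing in for encryption under the master key;
    # handles up to 32 bytes, which is plenty for a wrapped key.
    ks = hashlib.sha256(master).digest()
    return bytes(a ^ b for a, b in zip(data, ks))

toy_decrypt = toy_encrypt   # XOR stream: the same operation in both directions

def wrap_session_key(master, session_key):
    chk = hashlib.sha256(session_key).digest()[:4]  # key checksum; length/format illustrative
    return toy_encrypt(master, session_key + chk)

def unwrap_session_key(master, blob):
    data = toy_decrypt(master, blob)
    key, chk = data[:-4], data[-4:]
    if hashlib.sha256(key).digest()[:4] != chk:
        return None   # garbled in transit (or wrong master key): caught at load time
    return key

master, sk = b"master-key", bytes(range(8))
blob = wrap_session_key(master, sk)
assert unwrap_session_key(master, blob) == sk

bad = bytearray(blob)
bad[0] ^= 0x40                                  # one bit garbled in distribution
assert unwrap_session_key(master, bytes(bad)) is None
```

Whether BATON's checksum works anything like this I can't say; the sketch only
shows why checksumming the key, rather than the data, earns its keep.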

Anyway -- for most of the intended uses, error correction of the data was
either done at a different layer or wasn't important.  Keying was a
different matter.  While you could posit that it, too, should have been
wrapped in a higher layer, it is quite plausible that NSA wanted to guard
against system designers who would omit that step.  Or maybe it just
wasn't seen as the right way to go; as noted, layering wasn't a strong
architectural principle then (though it certainly did exist in lesser
forms).

		--Steve Bellovin, https://www.cs.columbia.edu/~smb