[cryptography] preventing protocol failings

Jon Callas jon at callas.org
Tue Jul 5 01:59:17 EDT 2011


On Jul 4, 2011, at 4:28 PM, Sampo Syreeni wrote:

> (I'm not sure whether I should write anything anytime soon, because of Len Sassaman's untimely demise. He was an idol of sorts to me, as a guy who Got Things Done, while being of comparable age to me. But perhaps it's equally valid to carry on the ideas, as a sort of a nerd eulogy?)
> 
> Personally I've slowly come to believe that options within crypto protocols are a *very* bad idea. Overall. I mean, it seems that pretty much all of the effective, real-life security breaches over the past decade have come from protocol failings, if not trivial password ones. Not from anything that has to do with hard crypto per se.

Let me be blunt here. The state of software security is so immature that worrying about crypto security or protocol security is like debating the options between hardened steel and titanium, when the thing holding the chain of links to the actual user interaction is a twisted wire bread tie. 

A lot of the other discussion is people noting that if you coated that bread tie with plastic rather than paper, it would be a lot more resistant to rust. And you know what, they're right!

In general, the crypto protocols are not the issue. I can enumerate the obvious exceptions where they were a problem as well as you can, and I think that they prove the rule. Yeah, it's hard to get the crypto right, but that's why they pay us. It's hard to get bridges and buildings and pavement right, too.

There are plenty of people who agree with you that options are bad. I'm not one of them. Yeah, yeah, sure, it's always easy to make too many options. But just because you can have too many options doesn't mean that zero is the right answer. That's just puritanism, the belief that if you just make a few absolute rules, everything will be all right forever. I'm smiling as I say this -- puritanism: just say no.

> 
> So why don't we make our crypto protocols and encodings *very* simple, so as to resist protocol attacks? X.509 is a total mess already, as Peter Gutmann has already elaborated in the far past. Yet OpenPGP's packet format fares not much better; it might not have many cracks as of yet, but it still has a very convoluted packet structure, which makes it amenable to protocol attacks. Why not fix it into the simplest, upgradeable structure: a tag and a binary blob following it?

Meh. My answer to your first question is that you can't. If you want an interesting protocol, it can't resist protocol attacks. More on that later.

As for X.509, want to hear something *really* depressing? It isn't a total mess. It actually works very well, even though all the mess about it is quite well documented. Moreover, the more that X.509 gets used, the more elegant its uses become. There are some damned fine protocols that use it and just drop it in. Yeah, yeah, having more than one set of encoding rules is madness, but letting that make you run screaming is just being squeamish. However, the problems with PKI have nothing to do with X.509 itself.

OpenPGP is a trivially simple protocol in its purest structure. It's just tag, length, binary blob. (Oh, so is ASN.1, but let's not clutter the issue.) You know where the convolutedness comes from? A lack of options. That, and over-optimization, which is actually a form of unneeded complexity. One of the ironies of protocol design is that you can make something complex by making it too simple.
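
To make that concrete, here's a minimal sketch in Python of what a pure tag/length/blob stream looks like to a parser. It's an illustration of the idea only, using a single-byte tag and a fixed four-byte length; real OpenPGP packet headers have old/new format bits and several variable-length length encodings, so don't mistake this for the actual wire format.

    import struct

    def parse_packets(data: bytes):
        """Walk a simplified tag/length/blob stream.

        An illustration of the framing idea, not OpenPGP's real packet
        header encoding.
        """
        packets = []
        offset = 0
        while offset < len(data):
            tag = data[offset]                                       # one-byte tag
            (length,) = struct.unpack_from(">I", data, offset + 1)   # four-byte length
            blob = data[offset + 5 : offset + 5 + length]            # opaque body
            if len(blob) != length:
                raise ValueError("truncated packet")
            packets.append((tag, blob))
            offset += 5 + length
        return packets

The framing itself is trivial; the convolutedness lives in what goes inside the blobs and in the clever length encodings, not here.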

I recommend Don Norman's new book, "Living With Complexity." He quotes what he calls Tesler's law of complexity, which is that the complexity of a system remains constant. You can hide the complexity, or expose it. If you give the user of your system no options, it means you end up with a lot of complexity underneath. If you expose complexity, you can simplify things underneath. The art is knowing when to do each.
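
Here's a toy illustration of where that complexity can end up living (hypothetical function names and a stand-in "cipher" -- this is about interface shape, not real cryptography): an interface with no options has to make every decision internally, while an interface that exposes the choice stays simple underneath and pushes the decision onto the caller.

    # A toy "cipher suite" registry.  XOR is a stand-in, not real crypto;
    # the point is only where the choice of suite gets made.
    def _xor_cipher(key: bytes, plaintext: bytes) -> bytes:
        return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

    SUITES = {"toy-xor-v1": _xor_cipher, "toy-xor-v2": _xor_cipher}

    # Complexity hidden: no options, so the library must decide which
    # suite is current and must carry any migration logic internally.
    def seal(key: bytes, plaintext: bytes) -> bytes:
        current = "toy-xor-v2"                 # internal policy decision
        return SUITES[current](key, plaintext)

    # Complexity exposed: the caller chooses, which keeps the library
    # simple underneath but puts the burden of choosing well on every
    # caller.
    def seal_with(suite_name: str, key: bytes, plaintext: bytes) -> bytes:
        return SUITES[suite_name](key, plaintext)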

If you create a system with truly no options, you create brittleness and inflexibility. It will fail the first time an underlying component fails and you can't revise it. 

If you want a system to be resilient, it has to have options. It has to have failover. Moreover, it has to fail over into the unknown. Is it hard? You bet. Is it impossible? No. It's done all the time.
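
A minimal sketch of what failing over can look like at the protocol level (hypothetical names; real protocols such as TLS and SSH do this with negotiated algorithm lists): each side advertises what it supports, unknown entries from the peer are ignored rather than fatal, and the two settle on the best mutually understood option.

    # Hypothetical algorithm negotiation: pick the first entry in our
    # preference list that the peer also supports.  Entries the peer
    # advertises that we don't recognize are simply ignored, which is
    # how a protocol tolerates algorithms that didn't exist when we
    # were written.
    def negotiate(our_prefs, peer_offers):
        peer = set(peer_offers)
        for alg in our_prefs:
            if alg in peer:
                return alg
        raise RuntimeError("no mutually supported algorithm")

    # The peer offers something newer ("hash-of-the-future") that we
    # don't know; negotiation still succeeds on common ground.
    chosen = negotiate(["sha2-256", "sha1"],
                       ["hash-of-the-future", "sha2-256"])
    print(chosen)   # -> "sha2-256"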

I started off being a mathematician and systems engineer before I got into crypto. I learned about building complex systems before I learned crypto, and complexity doesn't scare me. I look askance at it, but I don't fear it.

Yes, yes, simpler systems are more secure. They're also more efficient, easier to build, support, maintain, and everything else. Simplicity is a virtue. But it is not the *only* virtue, and I hope you'll forgive me for invoking Einstein's old saw that a system should be as simple as possible and no simpler. 

I think that crypto people are scared of options because options are hard to get right, but one doesn't get away from options by not having them. The only thing that happens is that when one's system fails, someone builds a completely new one and writes papers about how stupid we were at thinking our system would not need an upgrade. Options are hard, but you only get paid to solve hard problems.


> 
> Not to mention those interactive protocols, which are even more difficult to model, analyze, attack, and then formally verify. In Len's and his spouse's formalistic vein, I'd very much like to simplify them into a level which is amenable to formal verification. Could we perhaps do it? I mean, that would not only lead to more easily attacked protocols, it would also lead to more security...and a eulogy to one of the new cypherpunks I most revered.

Do you really think you can formally verify a protocol? I don't. In my checkered past, I worked on secure operating systems, and no one has ever made a verified system that has any expressive power in it.

Let's face it, if your system is as expressive as arithmetic, then you *can't* verify it. Sure, you might come close enough, but that's not verified.

So here's my promised conundrum. SSH has been proven to be secure. SSH was also broken. Albrecht, Paterson, and Watson broke it a couple of years ago. It wasn't a big break. It wasn't even a practical one in a lot of ways. It got fixed in software.

But there was a real struggle to get it fixed in software because everyone knew there was a proof of security, and hey, if there's a proof then the break must not exist, right?

The hinge point of the break, its fulcrum as we might call it, was that some structure lengths got encrypted in such a way that it gave known plaintext that could be used efficiently to break the protocol. The proof doesn't have anything in it about lengths. They're outside the proof's formal language. I could ramble on about that at length, but it doesn't matter.
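
For flavor, here's a deliberately simplified sketch of the receiver side of an SSH-like record layer (with stand-in decrypt_block/verify_mac callables and an assumed 20-byte MAC -- not a reimplementation of SSH or of the attack). The point is the ordering: the length comes out of the decrypted first block and gets acted on -- bytes are read, errors can be raised -- before any integrity check is possible, and that behavior is exactly the sort of detail the proof's formal language never mentioned.

    def read_exact(conn, n):
        # Read exactly n bytes from a blocking socket-like object.
        buf = b""
        while len(buf) < n:
            chunk = conn.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed connection")
            buf += chunk
        return buf

    def read_packet(conn, decrypt_block, verify_mac, block_size=16, mac_len=20):
        # Decrypt the first cipher block and trust the length field in it.
        first = decrypt_block(read_exact(conn, block_size))
        packet_length = int.from_bytes(first[:4], "big")      # attacker-influenced
        # How much more we read -- and whether we error out -- now depends
        # on decrypted but unauthenticated data.
        rest = decrypt_block(read_exact(conn, 4 + packet_length - block_size))
        mac = read_exact(conn, mac_len)
        if not verify_mac(first + rest, mac):                  # the check comes last
            raise ValueError("bad MAC")
        padding_length = first[4]
        return (first + rest)[5 : 5 + packet_length - padding_length - 1]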

The important point is that if you design a secure protocol, you formally verify it, and then you implement it, how do you know that the implementation didn't accidentally bring in some feature that, to the right clever person, is a security flaw?

I believe you can't, and I think there are mathematical reasons for that. But again -- stick to the practicalities. Given this existence proof of a flawed implementation of a secure protocol, how do you prove that your verification has any value at all once coded up?

	Jon



