[cryptography] Non-governmental exploitation of crypto flaws?

Jon Callas jon at callas.org
Tue Nov 29 04:52:18 EST 2011

On Nov 27, 2011, at 12:10 PM, Steven Bellovin wrote:

> Does anyone know of any (verifiable) examples of non-government enemies
> exploiting flaws in cryptography?  I'm looking for real-world attacks on
> short key lengths, bad ciphers, faulty protocols, etc., by parties other
> than governments and militaries.  I'm not interested in academic attacks
> -- I want to be able to give real-world advice -- nor am I looking for
> yet another long thread on the evils and frailties of PKI.

Steve, it's hard to know how to answer that, really. I often quote Drew Gross: "I love crypto; it tells me what part of the system not to bother attacking." I'd advise anyone wanting to attack a system to look at places other than the crypto. Drew cracked wise about that to me in 1999 and I'm still quoting him on it.

If you look at the serious attacks going on of late, none of them are crypto attacks, to the best of my knowledge, anyway. The existing quote-quote APT attacks are simple spear-phishing at best. A number of them are amazingly simplistic.

We know that the attack against EMC/RSA and SecurID was done with a vuln in a Flash object embedded in an Excel spreadsheet. According to the best reports I have heard, the Patient Zero of that attack had already had the infected file identified as bad! They pulled it out of the spam folder and opened it anyway. That attack succeeded because of a security failure in the device that sits between the keyboard and the chair, not because of any failure of technology.

There are also a number of cases where powerful governments have held suspects or convicted criminals, along with their encrypted data, and still have not broken the crypto. Real-world evidence says that if you pick a reasonably well-designed-and-implemented cryptosystem (like PGP or TrueCrypt) and exercise good OPSEC, your crypto won't be broken, even if you're up against the likes of First World TLAs.

I have, however, hidden many details in a couple of phrases above, especially the words "exercise good OPSEC."

If we look at it from the other angle, though, one of the cautionary tales I'd tell, along with a case study, is the TI signing-key break. The fellow who did it announced on a web board that <very long number> equals <long number_1> times <long number_2>. People didn't get it, so he wrote it out in hex. They still didn't get it, and he pointed out that the very long number could be found in a certain certificate. The other people on the board went through all of Kübler-Ross's stages in about fifteen posts. It's hilarious to read. The analyst said he'd sieved the key on a single computer in -- I remember it being about 80 days, but it could be 60ish. Nonetheless, he just went and did it.
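Part of what made the board exchange so striking is that the claim is trivially checkable arithmetic. A minimal sketch of the check (with small stand-in primes, not the actual 512-bit TI key):

```python
# Sketch of checking a claimed factorization, as posted on the board.
# Small stand-in primes here -- NOT the actual 512-bit TI signing key.

def verify_factorization(n: int, p: int, q: int) -> bool:
    """A claimed break is trivially verifiable: are p and q
    nontrivial factors whose product is exactly n?"""
    return 1 < p < n and 1 < q < n and p * q == n

n = 53 * 59  # stand-in for the "very long number" in the certificate
print(verify_factorization(n, 53, 59))  # True
print(verify_factorization(n, 7, 11))   # False
```

Finding the factors took months of sieving; confirming them takes one multiplication, which is why the denial on the board was so funny.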

On the one hand, he broke the crypto. But on the other hand, we had all known that 512-bit numbers can be quasi-easily factored. It was a shock, but not a surprise. 

Another thing to look at would be the cryptanalysis of A5/n over the years. Certainly, there's been brilliant cryptanalysis on those ciphers. But it's also true that the people who put them in place willfully avoided using ciphers known to be strong. It is as if they built their protocols so that they could hack them but they presumed we couldn't. We proved them wrong. Does that really count as cryptanalysis as opposed to puncturing arrogance?

If you want to look at protocol train wrecks, WEP is the canonical one. But that one had at its core the designers cheaping out on the crypto so that the hardware could be cheaper. I think it is a good exercise to look at the mistakes in WEP, but a better one is to look at creating something significantly more secure within the same engineering constraints. You *can* do better with about the same constraints, and there are a number of ways to do it, even.
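One of WEP's core mistakes is concrete enough to demonstrate in a few lines: it keys RC4 directly with IV || key, so whenever the 24-bit IV repeats, the keystream repeats, and XORing two ciphertexts cancels the keystream entirely. A toy sketch (illustrative Python, not real WEP framing):

```python
# Toy demonstration of WEP-style keystream reuse. Illustrative only:
# no real 802.11 framing, CRC, or key sizes.

def rc4_keystream(key: bytes, n: int) -> bytes:
    """Standard RC4: key-scheduling (KSA) then n bytes of PRGA output."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):                        # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(iv: bytes, key: bytes, plaintext: bytes) -> bytes:
    # WEP keys RC4 with IV || key directly, so a repeated IV
    # means an identical keystream.
    ks = rc4_keystream(iv + key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

iv, key = b"\x01\x02\x03", b"secretkey"
p1, p2 = b"attack at dawn", b"defend at dusk"
c1, c2 = wep_encrypt(iv, key, p1), wep_encrypt(iv, key, p2)

xor_c = bytes(a ^ b for a, b in zip(c1, c2))
xor_p = bytes(a ^ b for a, b in zip(p1, p2))
print(xor_c == xor_p)  # True: ciphertext XOR leaks plaintext XOR
```

With only 2^24 IVs, repeats are guaranteed on a busy network, and that is before you even get to the RC4 weak-key attacks. Mixing the IV into the key through a hash before keying the cipher, at essentially the same hardware cost, would have avoided this particular failure.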

I can list a number of oopses of lesser degrees, where someone took reasonable components and there were still problems with it. But I really don't think that's what you're asking for, either.

The good news we face today is that there really isn't any snake oil any more. If there is anything we can be proud of as a discipline, it's that the problems we face are genuine mistakes, as opposed to genuine or malicious misunderstanding of the problem.

The bad news is that there are two major problems left. One is misuse of otherwise mostly okay protocols. Users picking crap passwords is the most glaring example. There are a number of well-tested cryptosystems out there that are nearly universally used badly.
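The password problem is just arithmetic. A back-of-the-envelope sketch (the guess rate is an assumed round number, not a measured figure) of how a weak password undercuts the key it nominally protects:

```python
import math

# Back-of-the-envelope: a crap password undercuts a strong key.
# The guess rate below is an assumed round number, not a benchmark.

password_space = 26 ** 8            # 8 characters, lowercase only
bits = math.log2(password_space)
print(f"password space: {password_space:.2e} (~{bits:.0f} bits)")

key_space = 2 ** 128                # the key the password protects
print(f"key space:      {key_space:.2e} (128 bits)")

guesses_per_sec = 1e10              # assumed offline guessing rig
print(f"password falls in ~{password_space / guesses_per_sec:.0f} s")
```

An eight-character lowercase password is roughly a 38-bit secret guarding a 128-bit key; the attacker goes after the 38 bits every time.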

But the other one is Drew Gross's observation. If you think like an attacker, then you're a fool to worry about the crypto. Go buy a few zero days, instead. But that's only if you don't want to be discovered afterwards. If you don't care, there are so many unpatched systems out there that scattershotting well-crafted spam with a Flash exploit works just fine.

What I'm really saying here is that in the chain of real security, crypto is not the weak link. It's the strong one. I'm not saying that it can't be made stronger, nor am I saying that it shouldn't be made stronger. Also, we have to continue to teach people that writing cryptosystems is very, very, very hard.

I'm also not offering this as good news, but as bad news. Think of it this way: in the roadside rest stop of security, the crypto restroom is the cleanest one. That should depress everyone who knows how dirty ours is.
