[cryptography] How are expired code-signing certs revoked?
smb at cs.columbia.edu
Wed Dec 7 14:48:36 EST 2011
On Dec 7, 2011, at 12:34:29 PM, Jon Callas wrote:
> On 7 Dec, 2011, at 8:52 AM, Steven Bellovin wrote:
>> On Dec 7, 2011, at 11:31:23 AM, Jon Callas wrote:
>>> But really, I think that code signing is a great thing, it's just being done wrong because some people seem to think that spooky action at a distance works with bits.
>> The question at hand is this: what is the meaning of expiration or revocation
>> of a code-signing certificate? That I can't sign new code? That only affects
>> the good guys. That I can't install code that was really signed before the
>> operative date? How can I tell when it was actually signed? That I can't
>> rely on it after the specified date? That would require continual resigning
>> of code. That seems to be the best answer, but the practical difficulties
>> are immense.
> I want to say that the answer is "mu" because you can't actually revoke a certificate. That's not satisfying, though.
It's certainly one possible answer, and maybe it's the only answer. For now, though, I'd like to assume that there can be some meaning but I at least don't know what it is.
> I think it is a policy question. If I were making a software development system that used certificates with both expiration dates and revocation, I would check both revocation and expiry. I might consider it either a warning or an error, or have it be an error that could be overridden. After all, how can you test that the revocation system on the back end works unless you can generate revoked software?
I'm not sure what you mean.
> On a consumer-level system, I might refuse to install or run revoked software; that seems completely reasonable. Refusing to install or run expired software is problematic -- the thought of creating a system that refuses to work after a certain date is pretty creepy, and the workaround is to set the clock back.
Yup. In fact, it's more than creepy, it's an open invitation to Certain Software Vendors to *enforce* the notion that you just rent software.
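The policy options sketched above -- treat a bad certificate status as a hard error, a warning, or an overridable error, possibly differing for expiry versus revocation -- can be made concrete. A minimal sketch (hypothetical names; not any real installer's API):

```python
from datetime import datetime
from enum import Enum

class Policy(Enum):
    ERROR = "error"            # refuse outright
    WARN = "warn"              # proceed, but flag it
    OVERRIDABLE = "override"   # an error the user may override

def may_install(not_after, revoked, now,
                expiry_policy=Policy.WARN,
                revocation_policy=Policy.ERROR,
                user_override=False):
    """Return True if installation may proceed under the given policies."""
    def permitted(policy):
        if policy is Policy.ERROR:
            return False
        if policy is Policy.OVERRIDABLE:
            return user_override
        return True  # WARN: proceed; caller is expected to log a warning

    if revoked and not permitted(revocation_policy):
        return False
    if now > not_after and not permitted(expiry_policy):
        return False
    return True
```

The defaults here match the consumer-level stance in the quoted text: revocation is fatal, expiry merely warns, and a development system could flip either to OVERRIDABLE.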
> But really, it's a policy question that needs to be answered by the creators of the system, not the crypto/PKI people. We can easily create mechanism, but it's impossible to create one-size-fits-all policy.
Right now, I'm speaking abstractly. I'm not concerned with current PKIs or pkis or business models or what have you. If you'd prefer, I'll rephrase my question like this: Assume that there is some benefit to digitally-signed code. (Note carefully that I'm not interested in how the recipient gets the corresponding public key -- we've already had our "PKI is evil" discussion for the year.) Given that there is a non-trivial probability that the private signing key will be compromised, what are the desired semantics once the user learns this? (Again, I'm saying nothing about how the user learns it -- CRLs or OCSP or magic elves are all (a) possible and (b) irrelevant.) If the answer is "it depends", on what does it depend? Whose choice is it?
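One candidate semantics for key compromise, implicit in the earlier question "how can I tell when it was actually signed?": trust only signatures that can be shown (e.g. via a trusted timestamp) to predate the compromise. A hedged sketch of just that decision, with hypothetical names:

```python
from datetime import datetime
from typing import Optional

def still_trustworthy(signed_at: Optional[datetime],
                      compromised_at: Optional[datetime]) -> bool:
    """Decide whether a signature survives a key-compromise report.

    signed_at: signing time from a trusted timestamp, or None if there
               is no trustworthy evidence of when the code was signed.
    compromised_at: earliest known compromise time, or None if the key
                    is not known to be compromised.
    """
    if compromised_at is None:
        return True          # no known compromise; nothing to decide
    if signed_at is None:
        return False         # cannot distinguish old signatures from forgeries
    return signed_at < compromised_at
```

This makes the practical difficulty explicit: without a trusted timestamp, every signature made with a compromised key is indistinguishable from a forgery and must be rejected, which is exactly why "continual resigning" looks like the only alternative.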
Let's figure out what we're trying to accomplish; after that, we can try to figure out how to do it.
--Steve Bellovin, https://www.cs.columbia.edu/~smb