[cryptography] preventing protocol failings
zooko at zooko.com
Fri Jul 22 14:17:21 EDT 2011
On Tue, Jul 12, 2011 at 5:25 PM, Marsh Ray <marsh at extendedsubset.com> wrote:
> Everyone here knows about the inherent security-functionality tradeoff. I think it's such a law of nature that any control must present at least some cost to the legitimate user in order to provide any effective security. However, we can sometimes greatly optimize this tradeoff and provide the best tools for admins to manage the system's point on it.
From http://www.hpl.hp.com/techreports/2009/HPL-2009-53.pdf :
Most people agree with the statement, "There is an inevitable tension
between usability and security." We don't, so we set out to build a
useful tool to prove our point.
> Hoping to find security "for free" somewhere is akin to looking for
> free energy. The search may be greatly educational or produce very
> useful related discoveries, but at the end of the day the laws of
> thermodynamics are likely to remain satisfied.
If they've done what they claim (which I find plausible), then how
could it be possible? Where does this "free energy" come from?
I think it comes from taking advantage of information which is already
present but which is just lying about unused by the security
mechanism: expressions of intent that the user makes but that some
security mechanisms ignore.
For example, if you send a file to someone, then there is no need for
your tools to interrupt your workflow with security-specific
questions, like prompting for a password or access code, popping up a
dialog that says "This might be insecure! Are you sure?", or asking
you to specify a public key of your recipient. You've already
specified (as part of your *normal* workflow) what file and who to
send it to, and that information is sufficient for the security system to
figure out what to do. Likewise there is no need for the recipient of
the file to have her workflow interrupted by security issues.
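The idea can be sketched in a few lines. This is a toy illustration of "the designation already carries the security decision", not any real tool's API: the names `AddressBook` and `send_file` are invented, and the XOR "encryption" is a stand-in for a real public-key scheme.

```python
# Toy sketch: the act of naming a recipient in the normal workflow is
# itself the information the security layer needs -- no extra prompt,
# no "Are you sure?" dialog. All names here are hypothetical.

class AddressBook:
    """Maps recipients the user already knows to their keys."""
    def __init__(self, entries):
        self._entries = dict(entries)

    def key_for(self, recipient: str) -> bytes:
        # The recipient the user designated selects the key; the user
        # is never asked a separate security question.
        return self._entries[recipient]

def send_file(data: bytes, recipient: str, book: AddressBook) -> bytes:
    """'Encrypt' (toy XOR cipher) for the recipient the user named."""
    key = book.key_for(recipient)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

book = AddressBook({"alice": b"\x42"})
ciphertext = send_file(b"hello", "alice", book)
# The recipient, holding the same key, reverses the operation:
plaintext = bytes(b ^ 0x42 for b in ciphertext)
```

The point of the sketch is what is *absent*: there is no password prompt and no confirmation dialog, because the call `send_file(data, "alice", book)` already expresses everything the security mechanism needs.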
Again, the point is that *you've already specified*. The human has
already communicated all of the necessary information to the computer.
Security tools that request extra steps are usually being deaf to what
the human has already told the computer. (Or else they are just doing
"CYA Security", a.k.a. "Blame The Victim Security", where if anything
goes wrong later they can say "Well I popped up an 'Are You Sure?'
dialog box, so what happened wasn't my fault!".)
Okay, now I admit that once we have security tools that integrate into
user workflow and take advantage of the information that is already
present, *then* we'll still have some remaining hard problems about
fitting usability and security together.