[cryptography] preventing protocol failings

Marsh Ray marsh at extendedsubset.com
Wed Jul 13 03:34:48 EDT 2011


On 07/13/2011 01:01 AM, Ian G wrote:
> On 13/07/11 9:25 AM, Marsh Ray wrote:
>>
>> But the entire purpose of securing a system is to deny access to
>> the protected resource.
>
> And that's why it doesn't work; we end up denying access to the
> protected resource.

Denying to the attacker - good.

Denying to the legitimate user - unfortunately unavoidable some of the
time. The main purpose of authentication is to decide if the party is,
in fact, the legitimate user. So that process can't presume the outcome
in the interest of user experience.

I mis-type my password a significant percentage of the time. Of course I
know it's me but it would be absurd for the system to still log me in.
Me being denied access is a "bad user experience"™ (especially compared
to a system with no login authentication at all) but it's also necessary
for security.

However, a scheme which allowed me to log in with N correct password
characters out of M could still be quite strong (with good choices for N
and M) but it would allow for tuning out the bad user experiences to the
degree allowed by the situation-specific security requirements.

> Security is just another function of business, it's not special.

I disagree, I think it depends entirely on the business. Quite often
there are multiple parties involved with very divergent interests.

> The purpose of security is to improve the profitability of the
> resource.

Often the purpose is to reduce existential risks.

>> I think it's a law of nature that any control must present at
>> least some cost to the legitimate user in order to provide any
>> effective security. However, we can sometimes greatly optimize this
>> tradeoff and provide the best tools for admins to manage the
>> system's point on it.
>
> Not at all. I view this as hubris from those struggling to make
> security work from a technical pov, from within the box. Once you
> start to learn the business and the human interactions, you are
> looking outside your techie box. From the business, you discover
> many interesting things that allow you to transfer the info needed to
> make the security look free.

Well, you're right, except that it's not so much hubris as it is being
aware of one's limitations. The more general-purpose the protocol or
library is that you're working on, the less you can know about the
scenarios in which it will eventually be deployed.

You can't take for granted that there even is a "business" or
primarily financial interest on either endpoint. The endpoints needing
to securely communicate may be a citizen and their government, an
activist and a human rights organization, a soldier and his weapons
system, or a patient and their embedded drug pump.

> A couple of examples: Skype works because people transfer their
> introductions first over other channels, "hey, my handle is bobbob",
> and then secondly over the packet network. It works because it uses
> the humans to do what they do naturally.

Yeah, it's a big win when the users can bring their pre-established
relationships to bootstrap the secure authentication. This is the way
the Main St. district worked in small towns - you knew the hardware
store guy, you knew the barber, etc. Even if not, an unfamiliar business
wouldn't be around long without the blessing of the mayor and town cop.

But this is the exact opposite of the model that Netscape (and friends)
used
for ecommerce back in the early 90s. They recognized that the key
property necessary to enable the ecommerce explosion was for users to
feel comfortable doing business with merchants with which they had no
prior relationship at all. In order for this to happen there needed to
be a trusted introducer system and the CA system was born. This system
sucks eggs for many things for which it is used, but it is an undeniable
success at its core business goal: the lock icon has convinced users
that it's safe enough to enter their credit card info online.

> 2nd. When I built a secure payment system, I was able to construct a
> complete end-to-end public infrastructure without central points of
> trust (like with CAs). And I was able to do it completely. The
> reason is that the start of the conversation was always a. from
> person to person, and b. concerning a financial instrument. So the
> financial instrument was turned into a contract with embedded crypto
> keys. Alice hands Bob the contract, and his software then bootstraps
> to fully secured comms.

Ask yourself if just maybe you picked one of the easier problems to
solve? One where the rules and the parties' motivations were all
well-understood in advance?
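That said, the pattern the quoted paragraph describes is a real one: embed a fingerprint of the issuer's key in the instrument itself, so whoever receives the contract out-of-band can authenticate later communications without a CA. A toy illustration (the field name and layout here are invented, not the actual format of Ian's system):

```python
import hashlib

def issue_contract(contract_text: str, issuer_pubkey: bytes) -> str:
    """Append a fingerprint of the issuer's public key to the
    instrument.  The contract travels person-to-person, so the
    fingerprint rides along on the same trusted channel."""
    fp = hashlib.sha256(issuer_pubkey).hexdigest()
    return contract_text + "\nissuer-key-fingerprint: " + fp

def authenticate_peer(contract: str, presented_key: bytes) -> bool:
    """During a later handshake, check the key the peer presents
    against the fingerprint carried inside the contract itself."""
    for line in contract.splitlines():
        if line.startswith("issuer-key-fingerprint: "):
            expected = line.split(": ", 1)[1]
            return hashlib.sha256(presented_key).hexdigest() == expected
    return False  # contract carries no fingerprint: refuse
```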

> No, it's much simpler than that: denying someone security because
> they don't push the right buttons is still denying them security.

I don't understand. Are you speaking of denying them access to the
protected resource, or are you saying they are denied some nebulous form
of "security" in general?

> The summed benefit of internet security protocols typically goes up
> with the number of users, not with the reduction of flaws. The
> techie view has it backwards.

Maybe, typically.

But how do you know in advance what information the system will be used
to protect, and which situations are going to be "typical"? Do you
understand it well enough in advance that you can document all the
considerations in the RFC?

> This is a curiosity to me; has anyone actually figured out how to
> find a marketplace full of security conscious users?

McAfee, Symantec, Kaspersky, ZoneAlarm, ... these are examples of
endpoint security - but that's all the user really controls. A user who
spends $50 on a security package for their PC is allocating a far
greater share of their tech budget to security than most online
businesses do.

For internet web-based systems most of the security resides on the
server side. It's almost never the end user who writes the check for the
data security expenditure.

> Was there ever such a product where vendors successfully relied upon
> the users' good security sense?

Again, Gmail has added an option for two-factor authentication which
undoubtedly costs them money and at the end of the day makes it a little
bit harder for the legitimate users to log in to their accounts.

Activision/Blizzard has added an OATH-based 2FA option for millions of
users on battle.net. They even charge the users shipping & handling to
deliver the hardware token.

When users feel like they're protecting something of value (even their
time) they will make the choice for more security. It's when users place
little value on what they entrust to the protocol or other party that
they consistently act like they don't care. "Why would anyone go out of
their way to hack *my* boring email messages?"

- Marsh


