[cryptography] Social engineering attacks on client certificates (Was ... crypto with a twist)

James A. Donald jamesd at echeque.com
Sat Oct 13 16:27:02 EDT 2012


On 2012-10-14 12:21 AM, Thierry Moreau wrote:
> ianG wrote:
>> On 10/10/12 23:44, Guido Witmond wrote:
>>
>>> 2. Use SSL client certificates instead;
>>
>>
>> Yes, it works.  My observations and evidence suggest it works far 
>> better than passwords, because it cuts out the disaster known as "I 
>> lost my password...."
>>
>> It is what we do over at CAcert, [...]
>
> Sorry for the long digression below; the overall concern bugs me somehow.
>
> There is no doubt that the CAcert usage of client certificates is an 
> interesting experiment/deployment.
>
> However, the CAcert activities enabled by a valid client certificate 
> are of limited value to attackers, which limits the conclusions that 
> can be drawn from the deployment.
>
> When reviewing a security scheme design for a client organization, I 
> had to ask myself what a potential attacker would attempt if the 
> system were protecting million-dollar transactions.
>
> Currently, one US bank's usage of client certificates is under attack 
> (http://www.adp.com/about-us/trust-center/security-alerts.aspx, 
> "Fraudulent Emails Appearing to Come from ADP with Subject Line: ADP 
> Generated Message: First Notice - Digital Certificate Expiration").
>
> I have serious reservations about the vulnerability of "client 
> certificate" usage to such social engineering attacks. Here are some 
> of the questions.
>
> If we teach the user a long story about the *certificate* rules, how 
> can we expect him/her to pay attention to the *private*key*?
>
> Can't the user become confused about the public-key data elements 
> (certificate, private key, public key, local decryption password, key 
> pair, digital signatures), their respective origins, and their 
> look-and-feel in the user dialogs?
>
> Given this unavoidable state of confusion, how can the user defend 
> himself/herself against ill-intentioned guidance?
>
> If the user is given a genuine certificate containing privacy 
> sensitive subject name data, how do you expect him/her to react to the 
> information that the basic Internet protocol (TLS) exposes such data 
> in the clear to eavesdroppers? How can you expect him/her to protect 
> the private key once the certificate privacy lesson has been found bogus?
>
> If the user is given a certificate devoid of privacy sensitive subject 
> name data (e.g. self-signed, auto-issued, or obtained from 
> https://www.ecca.wtmnd.nl/ -- the proof of concept in the original 
> post), how do you expect him/her to pay any attention to protecting 
> anything?
>
> Can anyone tell me (I am the user now) which software component and 
> which computing environment I need to trust to be confident about the 
> strength of the RSA key generated for me when I got a certificate from 
> https://www.ecca.wtmnd.nl/? Actually, I would like to know how I could 
> learn by myself how the RSA key was generated for me. What is 
> security-critical in this certificate-granting process?
>
> Given that I exported the certificate obtained from 
> https://www.ecca.wtmnd.nl/ and used the openssl pkcs12 and openssl 
> pkcs8 utilities to "look under the hood" of the RSA private key, at 
> which point in the enrollment process should I have been warned 
> against these steps (or against equivalent actions suggested in a 
> social engineering attack)?
>
> As a concluding remark, I am nonetheless confident about the potential 
> of public-key techniques for improvements over the password-based 
> authentication paradigm. But I have difficulty with the widespread 
> abuse of language that equates client certificates with client 
> public-private key pairs. I'm afraid many security experts would even 
> have difficulty in distinguishing the two notions. The fact that 
> PKCS#12 format encryption covers both the private key and the 
> certificate does not help (you need to enter the private-key access 
> password to access the certificate, or even just the public key, in a 
> PKCS#12 file).
>
> Thanks in advance for sharing your views.
>
>

Because humans cannot themselves perform cryptographic operations, nor 
remember public keys as names of entities, the user interface becomes 
part of the security problem.  It is typically the unsecured part of a 
secured channel, the weak link in the chain.

Thus a security proposal needs a description centered on its user 
interface and perceived behavior.  The security behavior should match 
the user interface: the system should behave as the user expects.

Many of the problems you describe arise because the email interface is 
inconsistent with its actual security properties: an email that appears 
to come from Bank Alice does not necessarily come from Bank Alice.
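The mismatch can be made concrete: the From: header in an email is ordinary, unauthenticated text that the sender chooses freely. A minimal sketch in Python's standard email library (all addresses below are made up for illustration):

```python
from email.message import EmailMessage

# The From: header is just text chosen by the sender; nothing in the
# message format authenticates it.  (All addresses here are invented.)
msg = EmailMessage()
msg["From"] = "Bank Alice <security@bank-alice.example>"
msg["To"] = "victim@example.org"
msg["Subject"] = "First Notice - Digital Certificate Expiration"
msg.set_content("Your client certificate expires soon. Renew it here.")

# The forged sender is accepted and displayed without complaint:
print(msg["From"])  # Bank Alice <security@bank-alice.example>
```

Nothing stops an attacker from putting the bank's name there, which is exactly what the ADP-themed phishing emails quoted above exploit.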

Thus general security requires a secure name system, Zooko's triangle, 
which requires not just a bunch of cryptographic algorithms but also a 
bunch of tools for managing and sharing information about names - in 
other words, a whole lot of secure user interface.

Your browser bookmark list and your various contact lists /almost/ 
support Zooko's triangle, and /almost/ have Zooko-like behavior, but 
they have small, subtle deviations in behavior that make them not 
quite suitable.

This is in part because they were built without concern for security, 
and in part because they are built on top of a system that is wildly 
different from Zooko's triangle.

In a system based on Zooko's triangle, you would not have DNS, for DNS 
exists to render true names for network addresses humanly memorable, 
while in Zooko's triangle true names for network addresses are not 
humanly memorable.  Building a Zooko system on top of the existing 
system therefore turns out to be problematic, even though in practice 
DNS URLs are seldom all that humanly memorable, so actual usage and 
actual user interfaces have become Zooko-like: they insecurely map 
non-unique, human-memorable names to unique, non-human-memorable, 
insecure names.  A secure naming system would instead securely map 
non-unique, human-memorable names to unique, non-human-memorable, 
cryptographically secure names.
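One way to sketch that secure mapping, assuming for illustration that SHA-256 of a public key serves as the unique, non-memorable secure name, and that a local "petname" table supplies the human-memorable side (the key bytes and the name "Bank Alice" are stand-ins):

```python
import hashlib

# A unique, non-human-memorable, cryptographically secure name: the
# hash of the entity's public key.  (The key bytes here are a stand-in.)
alice_pubkey = b"...Bank Alice's DER-encoded public key..."
secure_name = hashlib.sha256(alice_pubkey).hexdigest()

# The non-unique, human-memorable side is a local petname table,
# secure because it lives on the user's own machine - like a bookmark
# list that stores keys instead of DNS names.
petnames = {"Bank Alice": secure_name}

# Resolving "Bank Alice" yields a name an attacker cannot forge without
# finding a different key that hashes to the same value.
print(petnames["Bank Alice"] == hashlib.sha256(alice_pubkey).hexdigest())  # True
```

This is the bookmark-list idea with the insecure DNS name replaced by a cryptographically secure one.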

DNS requires a center, since the supply of human-memorable true names 
is limited, and therefore true names have to have a price.  This center 
leads to no end of security problems.  A system in which true names 
are, or contain, hashes of rules identifying public key chains can be 
centerless, and therefore end to end.
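In the simplest variant of such a scheme, the true name is just a hash of the public key itself, so anyone can verify a name/key binding locally with no registry involved. A minimal sketch (the key bytes are placeholders, not a real key):

```python
import hashlib

def name_of(pubkey: bytes) -> str:
    # A centerless "true name": derived from the key itself, so no
    # registry has to allocate it and no name has to carry a price.
    return hashlib.sha256(pubkey).hexdigest()

def verify(claimed_name: str, pubkey: bytes) -> bool:
    # Anyone can check the name/key binding end to end, locally,
    # with no central authority involved.
    return name_of(pubkey) == claimed_name

key = b"placeholder public key bytes"
n = name_of(key)
print(verify(n, key))                     # True
print(verify(n, b"some attacker's key"))  # False
```

Because the binding is self-certifying, the scarcity (and hence the price and the center) that DNS imposes on names simply does not arise for the secure, non-memorable side of the triangle.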

