[cryptography] [info] The NSA Is Building the Country’s Biggest Spy Center (Watch What You Say)

ianG iang at iang.org
Sun Mar 25 22:07:34 EDT 2012


On 26/03/12 12:22 PM, Seth David Schoen wrote:
> ianG writes:
>
>> On 26/03/12 07:43 AM, Jon Callas wrote:
>>
>>> This is precisely the point I've made: the budget way to break crypto is to buy a zero-day. And if you're going to build a huge computer center, you'd be better off building fuzzers than key crackers.
>>
>> point of understanding - what do you mean by fuzzers?
>
> Automatically trying to make software incur faults with large amounts of
> randomized (potentially invalid) input.


Thanks.  That's what I thought Jon meant.

Ok, so Jon's point is that the attacker is better off searching for 
weaknesses with fuzzers than trying to break the crypto with a huge 
data center.  The data center costs gigabux, whereas fuzzing just 
costs centibux in power.

OK, slow today :)  Stop reading now, what follows is blabbering.


> https://en.wikipedia.org/wiki/Fuzz_testing

Side anecdote.  Funnily enough, when I was a 4th year thesis student 
back in 1983, I had two choices:  Ada operating systems research, or a 
sort of reverse parser (an inverse yacc) that generated random output 
conforming to some defined description.  The purpose I imagined for the 
reverse parser was two-fold:  one was to generate large D&D maps without 
spending more time on them than on the game itself, and the other was to 
generate feeds into programs to test them with.

(I chose the first direction for my thesis and it was a monumental 
failure.  Doh!)
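
In modern terms the reverse-parser idea is roughly this (a toy sketch in 
Python; the grammar and the names are invented for illustration and have 
nothing to do with the 1983 project):

    import random

    # Tiny grammar, yacc-ish in spirit: each nonterminal maps to a list of
    # alternative productions; a production is a list of terminals (plain
    # strings) and nonterminals (keys of the grammar dict).
    GRAMMAR = {
        "message": [["header", " ", "body"]],
        "header":  [["HELLO"], ["BYE"]],
        "body":    [["word"], ["word", " ", "body"]],
        "word":    [["foo"], ["bar"], ["baz"]],
    }

    def generate(symbol, rng, depth=0, max_depth=8):
        """Walk the grammar 'backwards': instead of parsing input against
        the rules, pick a random production and expand it recursively."""
        if symbol not in GRAMMAR:
            return symbol                      # terminal: emit as-is
        if depth >= max_depth:
            production = GRAMMAR[symbol][0]    # cut off runaway recursion
        else:
            production = rng.choice(GRAMMAR[symbol])
        return "".join(generate(s, rng, depth + 1, max_depth)
                       for s in production)

    if __name__ == "__main__":
        rng = random.Random()
        for _ in range(5):
            print(generate("message", rng))

Feed it a description of a map, or of a program's input format, and the 
same walk spits out test data to order.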


> If you get an observable fault you can repeat the process under a
> debugger and try to understand why it occurred and whether it is an
> exploitable bug.  Here's a pretty detailed overview:
>
> https://www.blackhat.com/presentations/bh-usa-07/Amini_and_Portnoy/Whitepaper/bh-usa-07-amini_and_portnoy-WP.pdf
>
> When it was first invented, fuzzing basically just consisted of feeding
> random bytes to software,


In my use of fuzzing, every protocol or data class has a function that 
generates a random example object.  A test harness calls this, uses the 
random example to send the object over the network and recover it at the 
other end, and then applies the class's object-compare function to test 
the result.  The process is of course recursive, and also 
cross-implementation.  In my experience this strategy has reduced simple 
protocol bugs by around two orders of magnitude, leaving more or less 
only semantic bugs.
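
For concreteness, the round-trip pattern looks roughly like this, 
sketched in Python (the class, the wire format and the names are 
invented for illustration; a real harness sends over an actual network 
connection rather than an in-memory stream):

    import io
    import random
    import struct

    class Payment:
        """Toy protocol object; real classes would be richer."""
        def __init__(self, amount, memo):
            self.amount = amount
            self.memo = memo

        @staticmethod
        def example(rng):
            # Every protocol class knows how to make a random example of itself.
            amount = rng.randrange(0, 2**32)
            memo = bytes(rng.randrange(256)
                         for _ in range(rng.randrange(0, 64)))
            return Payment(amount, memo)

        def encode(self, stream):
            # Wire format: 4-byte amount, 2-byte memo length, memo bytes.
            stream.write(struct.pack(">IH", self.amount, len(self.memo)))
            stream.write(self.memo)

        @staticmethod
        def decode(stream):
            amount, length = struct.unpack(">IH", stream.read(6))
            return Payment(amount, stream.read(length))

        def equals(self, other):
            # The object-compare function used by the harness.
            return self.amount == other.amount and self.memo == other.memo

    def fuzz_round_trip(trials=10000, seed=42):
        """Harness: make a random example, send it down the wire (here just
        an in-memory stream), recover it, and compare."""
        rng = random.Random(seed)
        for i in range(trials):
            original = Payment.example(rng)
            wire = io.BytesIO()
            original.encode(wire)
            wire.seek(0)
            recovered = Payment.decode(wire)
            assert original.equals(recovered), "round-trip mismatch at trial %d" % i

    if __name__ == "__main__":
        fuzz_round_trip()
        print("all round trips compared equal")

Where classes nest, example() on the outer class calls example() on its 
members, which is where the recursion comes in; pointing the decode side 
at a second implementation gives the cross-implementation part.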


> but now it can include sophisticated
> understanding of the kinds of data that a program expects to see, with
> some model of the internal state of the program.

Curious that they see it as knowledge of the data held externally to the 
program.  My first impression is that this would be quite inefficient, 
because it creates two centers of knowledge for one class.

It's a bit like the 1980s idea of putting the documentation inside the 
source - that had a dramatic effect on the quality of the doco, because 
it reduced the centers of knowledge to one.

> I believe there are
> also fuzzers that examine code coverage, so they can give feedback to the
> tester about whether there are parts of the program that the fuzzer isn't
> exercising.
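
Roughly, that coverage-feedback loop looks like this (a toy sketch in 
Python; real coverage-guided fuzzers instrument the program rather than 
tracing it, and the target and mutator here are invented for 
illustration):

    import random
    import sys

    def parse(data):
        """Toy target: a parser with a few branches for the fuzzer to discover."""
        if len(data) < 3:
            return "short"
        if data[0] == ord("M"):
            if data[1] == ord("Z"):
                if data[2] == 0xff:
                    raise ValueError("crash: bad header byte")  # planted bug
                return "exe"
            return "half-match"
        return "other"

    def coverage_of(func, data):
        """Run func(data), recording which lines executed and whether it crashed."""
        lines = set()
        def tracer(frame, event, arg):
            if event == "line":
                lines.add(frame.f_lineno)
            return tracer
        sys.settrace(tracer)
        try:
            func(data)
            crashed = False
        except Exception:
            crashed = True
        finally:
            sys.settrace(None)
        return lines, crashed

    def mutate(data, rng):
        """One random edit: flip a bit, insert a byte, or delete a byte."""
        data = bytearray(data)
        choice = rng.randrange(3)
        if choice == 0 and data:
            data[rng.randrange(len(data))] ^= 1 << rng.randrange(8)
        elif choice == 1:
            data.insert(rng.randrange(len(data) + 1), rng.randrange(256))
        elif data:
            del data[rng.randrange(len(data))]
        return bytes(data)

    def fuzz(seed=b"hello world", iterations=50000):
        rng = random.Random(1)
        corpus = [seed]          # inputs worth mutating further
        seen = set()             # union of all line coverage observed so far
        for i in range(iterations):
            child = mutate(rng.choice(corpus), rng)
            lines, crashed = coverage_of(parse, child)
            if crashed:
                print("crashing input after %d tries: %r" % (i, child))
                return child
            if not lines <= seen:
                # New code was exercised: keep this input as a mutation seed.
                seen |= lines
                corpus.append(child)
        print("no crash found in %d tries" % iterations)

    if __name__ == "__main__":
        fuzz()

The feedback is the seen set: an input only joins the corpus if it lit up 
a line nothing has reached before, so the fuzzer gradually concentrates 
on inputs that get deeper into the parser.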



iang


