[cryptography] How big a speedup through storage?

Lodewijk andré de la porte l at odewijk.nl
Sat Jun 21 08:35:26 EDT 2014


2014-06-20 21:46 GMT+02:00 Greg Zaverucha <gregz at microsoft.com>:

> Hi Lodewijk
> Here are some relevant references.
>

Thanks!


> A cryptanalytic time-memory trade-off.
> ME Hellman - IEEE Transactions on  Information Theory, , 1980
> http://www.cs.miami.edu/home/burt/learning/Csc609.122/doc/36.pdf
>

This is pretty basic; it has become common knowledge, so apparently it was a
valuable paper in its day :). It's also a bit more archaic than my current
timespan allows me to process properly.

It's also important to remember that whatever tricks are actually in use, they
are most likely not public knowledge.
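For reference, the core of Hellman's trade-off can be sketched in a few lines:
iterate a one-way function into chains, store only (endpoint, start) pairs, and
pay with recomputation at lookup time. This is a toy over a made-up 24-bit
domain; all parameters are mine, not from the paper.

```python
import hashlib

CHAIN_LEN = 256
DOMAIN_BYTES = 3  # toy 24-bit search space

def h(x: int) -> int:
    # toy one-way function: MD5 truncated back into the 24-bit domain,
    # so it doubles as Hellman's hash-plus-reduction step
    d = hashlib.md5(x.to_bytes(DOMAIN_BYTES, "big")).digest()
    return int.from_bytes(d[:DOMAIN_BYTES], "big")

def chain_end(start: int) -> int:
    x = start
    for _ in range(CHAIN_LEN):
        x = h(x)
    return x

# store only endpoint -> start: memory is #chains, coverage is #chains * CHAIN_LEN
table = {chain_end(s): s for s in range(32)}

def invert(y: int):
    # given y = h(x), walk forward: if x lies on a stored chain we hit its endpoint
    z = y
    for _ in range(CHAIN_LEN):
        if z in table:
            x = table[z]  # regenerate from the start to recover the preimage
            for _ in range(CHAIN_LEN):
                if h(x) == y:
                    return x
                x = h(x)
        z = h(z)
    return None
```

Lookup costs time instead of memory: up to CHAIN_LEN hash evaluations per walk,
which is exactly the trade-off in the title.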


> Making a Faster Cryptanalytic Time-Memory Trade-Off.
> P. Oechslin.  CRYPTO 2003
> http://lasec.epfl.ch/pub/lasec/doc/Oech03.pdf
>

Ah yes, rainbow tables :) I remember making one for MD5 in school with PHP
and MySQL. It worked decently, but I didn't have enough storage/preprocessing
time to really get further than the tables available for free online. Good fun
though; PHP is very easy to just use. I think mine wasn't as good as it
could have been if I'd had proper references. No way to dig it up now though.

Some funny stuff here too: "They present a detailed analysis which is
backed up by simulation in a purpose-built FPGA." What is a purpose-built
FPGA? FPGAs are adaptable, that's the whole point! If it were purpose-built
it'd be an ASIC, and not an FPGA at all. Nice paper, but already more
or less "common knowledge".
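The rainbow twist can be sketched as a small change to the chain step: make the
reduction depend on the position in the chain, so two chains only merge when
they collide at the same position. A toy sketch over a made-up 24-bit domain;
all parameters are mine, not from the paper.

```python
import hashlib

CHAIN_LEN = 256
MASK = 0xFFFFFF  # toy 24-bit domain

def h(x: int) -> int:
    # toy one-way function: MD5 truncated into the 24-bit domain
    d = hashlib.md5(x.to_bytes(3, "big")).digest()
    return int.from_bytes(d[:3], "big")

def step(x: int, i: int) -> int:
    # position-dependent reduction: Oechslin's change vs Hellman's fixed one
    return (h(x) + i) & MASK

def rainbow_end(start: int) -> int:
    x = start
    for i in range(CHAIN_LEN):
        x = step(x, i)
    return x

table = {rainbow_end(s): s for s in range(32)}  # endpoint -> start

def invert(y: int):
    # y = h(x): try each position the hash could occupy in a chain
    for pos in range(CHAIN_LEN - 1, -1, -1):
        z = (y + pos) & MASK  # the reduction as applied at position pos
        for i in range(pos + 1, CHAIN_LEN):
            z = step(z, i)
        if z in table:
            x = table[z]  # regenerate the chain from its start
            for i in range(CHAIN_LEN):
                if h(x) == y:
                    return x
                x = step(x, i)
    return None
```

One table with t^2/2 lookup work replaces Hellman's t separate tables, which is
where the speedup in the title comes from.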


> Understanding brute force.
> Daniel J. Bernstein
> http://cr.yp.to/snuffle/bruteforce-20050425.pdf
>

The first line hits home: "There is a widespread myth that parallelizing a
computation cannot improve its price-performance ratio." The term
price-performance is accurate, but "price" no longer means what it should at
the scale of the agency concerned. It's more a question of what is humanly
possible, I think.

It's interesting that the whole paper goes no further than showing that the
myth does not hold. That's good, but it does not go nearly as far as it could.

It also has a more general comment: "“But 256-bit AES deliberately uses 14
rounds to keep us secure against non-brute-force attacks!” some people will
argue. “There’s an 8-round attack taking time only 2^204, for example.
Okay, okay, that’s on a machine with 2^104 bits of memory, but maybe some
additional ideas will produce a 10-round attack more efficient than a
256-bit parallel brute-force key-search attack.”" It is my understanding
that the rounds are required to diffuse the input. Using fewer rounds
than a certain threshold fails to blend the input sufficiently and creates
strong patterns in the output; in other words, it completely breaks diffusion.
Therefore the detriment to AES's security factor is much stronger than this
paragraph suggests, as patterns arise where there previously weren't any.
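The diffusion point can be illustrated with a toy. This is NOT AES; it is a
made-up add-rotate-xor round of my own, only meant to show how avalanche grows
with the round count.

```python
import random

MASK = (1 << 64) - 1

def toy_round(x: int, k: int) -> int:
    # one made-up add-rotate-xor round; not an AES round
    x = (x + k) & MASK
    x = ((x << 7) | (x >> 57)) & MASK  # rotate left by 7
    return x ^ k

def toy_encrypt(x: int, rounds: int, key: int = 0x9E3779B97F4A7C15) -> int:
    for r in range(rounds):
        x = toy_round(x, (key * (r + 1)) & MASK)
    return x

def avalanche(rounds: int, trials: int = 300) -> float:
    # average number of output bits flipped by a single input-bit flip;
    # full diffusion would give about 32 of 64
    rng = random.Random(0)
    total = 0
    for _ in range(trials):
        p = rng.getrandbits(64)
        bit = 1 << rng.randrange(64)
        total += bin(toy_encrypt(p, rounds) ^ toy_encrypt(p ^ bit, rounds)).count("1")
    return total / trials
```

With one round a flipped bit barely spreads; with many rounds the count should
approach 32 of 64. Cutting rounds below the diffusion threshold leaves exactly
the kind of patterns described above.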


The type of attack I see the most profit in is a variant of a "meet in the
middle" attack, one that moves further away from a centralized point of
execution. We have a function family generator F(key, step) that creates a
function to be performed on a ciphertext C(step), where key is a
random-walk value used in the attempted decryption and step is a "class".
The function will only execute on programs with the right class. F's results
are compared to successfully completed decryptions, possibly seeded with
foreknowledge.

F is supposed to generate functions that find patterns or perform mutations.
On a serial machine this would be completely infeasible, not unlike how
Neural Network simulation is uninteresting there. But in this parallel
machine, "transient" paths are untangled by converging on a nearby physical
location. IOW: the result of a function generated by F determines the next
function to be executed, and the functions are mapped out over the machine.
The machine thus automatically brings similar results closer together, which
allows for deduplication. If one is able to deduplicate against a known
decipherment, a more serial/central process is supposed to take note of that
and test a key coming forth from it, or at least shift the probability of
working with a certain class of keys (those that share a property extracted
by the functions).
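As a toy serial simulation of that convergence/deduplication idea: hash each
intermediate result to a "location", merge duplicates that land together, and
let each survivor's content pick its next function. Everything here (the
stand-in mutation function, the bucket count) is made up for illustration,
not a real design.

```python
import hashlib
from collections import defaultdict

def loc(value: bytes, buckets: int = 64) -> int:
    # content-addressed routing: equal values always land in the same place
    return hashlib.md5(value).digest()[0] % buckets

def mutate(value: bytes, step: int) -> bytes:
    # stand-in for a function generated by F(key, step)
    return hashlib.md5(value + step.to_bytes(2, "big")).digest()

def run(work_items, steps: int = 4):
    # work_items: list of ciphertext-like byte strings
    for step in range(steps):
        buckets = defaultdict(set)
        for v in work_items:
            buckets[loc(v)].add(v)  # converging: duplicates meet and merge
        # deduplicated survivors each move on to the step's next function
        work_items = [mutate(v, step) for b in buckets.values() for v in b]
    return work_items
```

The point of the physical machine would be that this routing happens by
locality instead of by a central dictionary, as sketched here.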

This is a meet-in-the-middle attack because in the end there will have to
be brute forcing; the intermediate results are an optimization of it.
Additionally, there will be many "middle points" floating around the machine
at any given point in time.
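For comparison, the classical meet-in-the-middle attack on double encryption
looks like this in miniature. The cipher is a made-up toy with 8-bit
half-keys; all names and constants are mine, not from any real scheme.

```python
MOD = 1 << 16
A = 40503                # arbitrary odd constant, invertible mod 2**16
A_INV = pow(A, -1, MOD)  # modular inverse (Python 3.8+)

def enc(k: int, x: int) -> int:
    # made-up toy block cipher: xor the key, multiply by A
    return ((x ^ k) * A) % MOD

def dec(k: int, y: int) -> int:
    return ((y * A_INV) % MOD) ^ k

def double_enc(k1: int, k2: int, x: int) -> int:
    return enc(k2, enc(k1, x))

def mitm(x: int, y: int, keybits: int = 8):
    # table all middle values from the plaintext side...
    fwd = {enc(k1, x): k1 for k1 in range(1 << keybits)}
    # ...then meet them coming backward from the ciphertext side
    for k2 in range(1 << keybits):
        mid = dec(k2, y)
        if mid in fwd:
            return fwd[mid], k2
    return None
```

Roughly 2^8 + 2^8 work instead of 2^16, paid for with 2^8 stored middle
values; the scheme sketched above keeps that flavour but spreads the middle
points over a physical machine.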

This is not the whole story, however. The really interesting trick is that
many different ciphertexts will propagate through the machine at the same
time. Not only are middle points matched within a single ciphertext, but
also across multiple ciphertexts. This could be especially interesting when
the key is (partially) shared.

The really hard thing is the construction of F. F will have to generate
functions with an understanding of how patterns should and shouldn't arise
given certain steps. There's an advantage to integrating common plaintexts,
as their mutations should leave a certain pattern. Etc. etc. I could go on
and try to work this out, but I don't have time now.

In short: there may be patterns between different decodings. There are ways
for a parallel computation to avoid longer-term memory for intermediate
results, and that could be much faster. There is potentially profit in
decoding many messages truly simultaneously. There is less research into
parallel algorithms than into sequential algorithms, so we may simply not
know very well. There may be heuristic shortcuts in intermediate AES
decodings that we have not yet found. Etc.

I'm more interested in finding out how well it's been thought through than
in rambling about transcendent patterns and wild ideas as I did above. It's
interesting to think of how to solve hard problems, but a theory is really
hard to communicate (that's why papers are so hard to read) and to prove/test
(that's why papers need to be so hard to read). Let alone actually do it!
But the agency has a history of doing just that, without anybody outside the
agency knowing about it.

If the agency is indeed willing to spend so much on so complicated a machine,
then it would prefer to lift part of its veil rather than let speculation run
wild. If Area 51 were publicly exposed as a ridiculously expensive
airplane-testing facility it wouldn't be so interesting, and discussion would
be about the price, not about "alien technology in aircraft". To use an
analogy: the trick of good magicians isn't that they hide their movements so
well, it's that they divert attention from them. If the agency hadn't leaked
to us that they want a sort of Tempora (the UK program keeping all Internet
traffic for 3 days), we would be waxing lyrical and frantic about what's
going to happen in that facility.

Let's consider something else it might be. Maybe we'll learn something
about crypto just trying to see how they'd want to do it!  :)