[cryptography] High-AT proof-of-work

Elias Yarrkov yarrkov at gmail.com
Sat Jan 19 16:41:32 EST 2013


On Sat, Jan 19, 2013 at 1:28 PM, Elias Yarrkov <yarrkov at gmail.com> wrote:
> On Sat, Jan 19, 2013 at 12:20 PM, CodesInChaos <codesinchaos at gmail.com> wrote:
>> 1. Run the sequential squaring algo, but only store every 2^{b/4}-th value.
>> This takes around 2^{b/2} time and 2^{b/4} memory (product 2^{b*3/4})
>> 2. Run parallel squaring from all checkpoints stored in step 1. Use 2^{b/4}
>> small cores, taking 2^{b/4} time, and 2^{b/2} memory (product 2^{b*3/4})
>>
>> Cost is something like 2^{b*3/4}
>
> In step 1, most of the memory of this machine is wasted. Utilized or not, it
> must be constructed to run step 2. If one builds a machine using this approach
> to solve a large number of challenges, can it be made useful? I can't think of
> a way.
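
For concreteness, here is a rough Python sketch of the quoted two-step
trade-off. Everything concrete in it is assumed for illustration only:
that the sequential work is iterated squaring mod N from a
challenge-derived seed, and the toy values of N, b and the seed. The
collision criterion step 2 actually searches for is not repeated here.

N = (2**61 - 1) * (2**89 - 1)  # toy modulus, not a real parameter choice
b = 24                         # toy hardness parameter
STEPS = 2**(b // 2)            # chain length: ~2^{b/2} sequential squarings
STRIDE = 2**(b // 4)           # keep every 2^{b/4}-th chain value

def step1_checkpoints(seed):
    # Step 1: walk the whole chain once, storing ~2^{b/4} checkpoints.
    x = seed % N
    checkpoints = []
    for i in range(STEPS):
        if i % STRIDE == 0:
            checkpoints.append((i, x))
        x = (x * x) % N
    return checkpoints         # ~2^{b/2} time, ~2^{b/4} memory

def resume_from(checkpoint, offset):
    # Step 2 building block: recompute the chain value `offset` squarings
    # past the stored checkpoint.
    i, x = checkpoint
    for _ in range(offset):
        x = (x * x) % N
    return x

# In hardware, step 2 runs this forward walk from all checkpoints at once
# on 2^{b/4} small cores; what exactly it looks for (the collision
# mentioned below) depends on the PoW definition, which is not repeated
# here. The assert only checks that the chain value at position
# 5*STRIDE + 17, recomputed from its nearest checkpoint, matches a walk
# from the start of the chain.
cps = step1_checkpoints(123456789)
assert resume_from(cps[5], 17) == resume_from(cps[0], 5 * STRIDE + 17)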

I did think of a way to get to 2^{b*3/4} when many challenges are targeted.

The machine is like this:
2^{b/4} step-1 circuits, pipelined so one finishes every 2^{b/4} time.
One step-2 circuit, getting its input from the step-1 circuits.

The size of this machine is 2^{b/4}*2^{b/4} + 2^{b/2}, i.e. O(2^{b/2}).
It has a throughput of one challenge per 2^{b/4} time.
That looks like 2^{b*3/4}.
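
A quick numerical sanity check of that arithmetic, ignoring the
constants hidden by the O() notation (b = 80 is just an example value):

b = 80  # example value only

step1_bank = 2**(b // 4) * 2**(b // 4)  # 2^{b/4} circuits of size 2^{b/4} each
step2 = 2**(b // 2)                     # one step-2 circuit
area = step1_bank + step2               # = 2 * 2^{b/2}

time_per_challenge = 2**(b // 4)        # one challenge completes per 2^{b/4} time
area_time = area * time_per_challenge   # = 2^{b*3/4 + 1}, i.e. O(2^{b*3/4})

assert area == 2 * 2**(b // 2)
assert area_time == 2**(3 * b // 4 + 1)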

How one would actually find a collision in step 2 isn't immediately obvious;
giving 2^{b/4} cores shared fast random-access memory is problematic. But one
can modify step 2 and use Schimmler sorting to find a collision while staying
in O(2^{b*3/4}).
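
In software the shape of that sort-based collision search looks like
the sketch below. Schimmler sorting itself is a mesh sort meant for 2D
hardware; the ordinary in-memory sort and the toy squaring map here are
only stand-ins for illustration.

def find_collisions(tagged_values):
    # Tag each value with where it came from, sort by value, then scan
    # adjacent entries; equal neighbours are collisions. The cost is
    # dominated by the sort, with no shared random-access lookups needed.
    ordered = sorted(tagged_values)
    hits = []
    for (v1, t1), (v2, t2) in zip(ordered, ordered[1:]):
        if v1 == v2:
            hits.append((t1, t2))
    return hits

# Toy usage: squaring mod a small prime collides x with p - x.
p = 251
side_a = [((x * x) % p, ("a", x)) for x in range(1, 40)]
side_b = [((x * x) % p, ("b", x)) for x in range(212, 251)]
collisions = find_collisions(side_a + side_b)
print(len(collisions), collisions[:3])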

That's still pretty good, though. When a simple single-threaded program solves
a challenge in O(2^c) time, here are some numbers for custom hardware:
Traditional POW: ASIC of size O(2^c) solves one challenge per O(1) time.
This POW: ASIC of size O(2^c) solves one challenge per O(2^{c/2}) time.

Assuming no better implementations than O(2^{b*3/4}). ;)
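
A back-of-the-envelope comparison of the ASIC's area*time per challenge
in the two cases, taking all the O() constants as 1 (the values of c are
arbitrary examples):

for c in (20, 30, 40):
    area = 2**c
    at_traditional = area * 1           # one challenge per O(1) time
    at_this_pow = area * 2**(c // 2)    # one challenge per 2^{c/2} time
    ratio = at_this_pow // at_traditional
    print("c=%d: ASIC area*time per challenge is %d (= 2^%d) times higher"
          % (c, ratio, c // 2))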

-- 
yarrkov -- http://cipherdev.org/


