>>100
J was the set of translation inputs j_i that are applied to each x_i in X for i = 1, ..., N, where x_i + j_i = y_i in Y.
In other words, I'm not sure what effect the "offsets" not all being different has. Of course, they can all be different yet still synchronize, as in the function:
f(x) = (x + j) mod N, where j = N - x
Then this is a hash function made entirely of collisions. Setting N = 10:
f(4) = (4 + (10 - 4)) mod 10 = 0
f(5) = (5 + (10 - 5)) mod 10 = 0
f(6) = (6 + (10 - 6)) mod 10 = 0
...
Even though each value j_i for x_i is unique (being effectively just an inversion of the set X)...
Anyway, I use C. I just wrote something that generates a lot of random perfect hash functions, generates all subsets of size c, applies those as operations, and pulls some stats out of it. No real need for bignum stuff: if your subset generation / set operations go beyond what a long int can hold, you're going to need a supercomputing cluster anyway. Mostly just trying to figure out where to start.