
Infinite Compression techniques

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-09 9:48

I present a system which would be capable of unlimited compression of any data.
Theory:
Every file can be mapped 1:1 to a non-negative integer (the number representing its bytes).
Every integer can be represented as a range of real numbers,
i.e. 3 is the range from 3.000... to 3.999...
Now if we multiply the original number by 10 (3*10 = 30),
the range is multiplied too: 30.000... to 39.999..., and every number in it divided by 10 gives 3 back as an integer.
Suppose we can also alter the original number by shifting the range up or down with an extra factor:
with 3+1 or 3-1, the range 3.000...-3.999... becomes 4.000...-4.999... or 2.000...-2.999... respectively.
The compression is as follows. The original number is multiplied by a huge scale to create a number
at least twice as long as the file, giving a very large floating-point range.
Then a 64-bit scale modifier (applied to the original number) shifts the range up or down, so the search space becomes 2^64 times bigger than the original range.
Compression is a search for any number in that huge range which can be represented more compactly;
when one is found (say via some function like e^A), A is recorded along with the scale modifier.
Since the range is enormous, there are surely some numbers which can be represented in short form as
function(x) = number_in_range.
Decoding is as follows: function(x) is run, the resulting number is divided by the scale, then the scale modifier is removed
to recover the original number, which is converted back to a file.
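The search step described above can be sketched in Python. SCALE, the e^A function family, and the range bounds are my assumptions for illustration, not the poster's exact parameters; the sketch only shows the mechanics, with no compression claim intended.

```python
import math

SCALE = 10**12  # hypothetical "huge scale" (assumption, not from the post)

def encode(n):
    """Search the scaled range [n*SCALE, (n+1)*SCALE) for a value
    of the form e**A with integer A; return A if one exists."""
    lo, hi = n * SCALE, (n + 1) * SCALE
    A = math.ceil(math.log(lo))   # smallest integer A with e**A >= lo
    if math.exp(A) < hi:
        return A                  # e**A lands inside the range
    return None                   # no compact representative here

def decode(A):
    """Reverse: evaluate the function, then divide the scale back out."""
    return int(math.exp(A)) // SCALE

# e**29 ~= 3.93e12 falls in [3e12, 4e12), so 3 encodes; 2 does not.
```

Note that at most one integer exponent can land inside each scaled range, which already hints at how rarely such a search succeeds.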

Name: Anonymous 2011-11-13 7:10

In response to your theory:

Let's say your original number is 2 (4 bytes, let's assume) and you have a file of 2.4 MiB; you multiply 2 by 2516582 to get 5033164 (bytes and range),
so where before you had 2 - 2.(9), you now have 10066328 - 15049160.36.
Now we multiply this range by a 64-bit scale modifier (a shift, as you say); if we apply this to the original number 2 we get 2^64, which doesn't really serve any purpose.
Finally, the compression, you say, is possible by searching for numbers in a huge range (it must be 10066328 - 15049160.36, since that's the greatest range yet); when one is found we disregard the original number and save the multiplier and the scale modifier (I don't know what for, though). Now, since the range is really enormous, you need some way to index where in it the actual data sits, and you provide none.
So as far as encoding goes, you basically generate garbage, say your data must be in there somewhere, and provide no way to find it. Even if you were to implement a deterministic pseudo-random generator and store the number of iterations needed to reach the actual data, the time needed to archive something would grow exponentially with its size.
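That PRNG-iteration idea is easy to sketch (the seed, stream width, and helper name below are mine, purely illustrative): the index of the first match is itself roughly as large a number as the data it names, so writing it down saves nothing.

```python
import random

def prng_index(target: bytes, max_tries: int):
    """Scan a fixed-seed deterministic PRNG stream and return the
    iteration at which it first emits `target`, or None."""
    rng = random.Random(0)  # fixed seed: the stream is reproducible
    n = len(target)
    for i in range(max_tries):
        if rng.getrandbits(8 * n).to_bytes(n, "big") == target:
            return i
    return None

# For a k-byte target the expected index is around 256**k, so the
# index needs about 8*k bits to store: no space is saved.
```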

As for the decoding, it seems straightforward: you generate the mess, you have the means to know where the data should be, and you go get it. Fine.

It is still a very poor explanation of a stupid way of archiving something, which will fail regardless, since the only way to encode all possible data streams is with an infinite, truly random yet deterministic (let's say predictable instead: reproducible with a function) stream generator.
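The counting argument behind that verdict is easy to make concrete (a minimal illustration, not part of the thread): there are strictly fewer bit strings shorter than n bits than there are n-bit strings, so no lossless scheme can shrink every input.

```python
def count_shorter(n):
    """Number of distinct bit strings strictly shorter than n bits
    (including the empty string): 2**0 + 2**1 + ... + 2**(n-1)."""
    return sum(2**k for k in range(n))

n = 16
inputs = 2**n                # 65536 possible 16-bit inputs
outputs = count_shorter(n)   # 65535 strictly shorter outputs
# One fewer output than inputs: any scheme that shortens every
# 16-bit string must map two of them to the same output.
```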
