
Infinite Compression techniques

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-09 9:48

I present a system which would be capable of unlimited compression of any data.
Theory:
Every file can be mapped 1:1 to a positive unsigned integer (representing the data itself).
Every integer can be represented as a range of floating-point numbers,
i.e. 3 is the range from 3.000... to 3.999...
Now if we multiply the original number by 10: 3*10=30,
the range is also multiplied, 30.000... to 39.999...; all of these numbers divided by 10 give 3 as an integer.
Suppose we can also alter the original number by shifting the range up or down with an extra factor:
with 3+1 or 3-1, the range 3.000...-3.999... becomes 4.000...-4.999... and 2.000...-2.999... respectively.
The compression is as follows. The original number is multiplied by a huge scale to create a number
which is at least twice as long as the file, giving a very large floating-point range.
Then we widen this range by adding a 64-bit scale modifier (applied to the original number), which shifts the range up and down, so the space of the range is now 2^64 times bigger than the original.
The compression is a search for any number in that huge range which can be represented more compactly;
when one is found (for some function like e^A), A is recorded along with the scale modifier.
Since the range is enormous, there are certainly some numbers which can be represented in short form as
function(x) = number_in_range.
The decoding is as follows: function(x) is run, and the resulting number is divided by the scale; then the scale modifier is reversed
to get the original number, which is converted back to a file.
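The range arithmetic above can be sketched as a toy (this only illustrates the claimed mapping, scaling, and shifting steps; the "search for a compact f(x)" step is omitted, and the function names are made up):

```python
# Toy sketch of the range arithmetic described above. Illustration only,
# not a working compressor; the search step is where the scheme breaks.

def integer_range(n):
    """The integer n stands for the half-open real range [n, n+1)."""
    return (float(n), float(n + 1))

def scale(rng, factor):
    # multiplying the number multiplies the whole range
    lo, hi = rng
    return (lo * factor, hi * factor)

def shift(rng, delta):
    # the "scale modifier": shifts the range up or down
    lo, hi = rng
    return (lo + delta, hi + delta)

r = integer_range(3)           # (3.0, 4.0)
print(scale(r, 10))            # (30.0, 40.0): every x here gives x // 10 == 3
print(shift(r, 1))             # (4.0, 5.0)
print(shift(r, -1))            # (2.0, 3.0)
```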

Name: Anonymous 2012-07-18 20:37

>>160
OVER 150LBS OF melancholy and regret :(

Name: Anonymous 2012-07-18 20:58

uh, i think there's already a compression scheme like that.

weight the characters into the 0-1 space by their entropy in the file, pick the sub-interval of the first character in the file, map the original space into that sub-interval, pick the next, etc. etc., then pick a fixed-point number in the final interval. it's like other compression methods, only nobody uses it because it's shitty.
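a minimal sketch of that interval-narrowing idea (the symbol probabilities here are made up for illustration):

```python
from fractions import Fraction

# Each symbol owns a sub-interval of [0, 1) sized by its probability;
# encoding a message repeatedly narrows the interval. Any number in the
# final interval (plus the message length) identifies the message.
probs = {'a': Fraction(1, 2), 'b': Fraction(1, 4), 'c': Fraction(1, 4)}

def cumulative(probs):
    out, lo = {}, Fraction(0)
    for sym, p in sorted(probs.items()):
        out[sym] = (lo, lo + p)
        lo += p
    return out

def encode(msg, probs):
    cum = cumulative(probs)
    lo, hi = Fraction(0), Fraction(1)
    for sym in msg:
        s_lo, s_hi = cum[sym]
        width = hi - lo
        lo, hi = lo + width * s_lo, lo + width * s_hi
    return lo, hi

lo, hi = encode("abac", probs)
print(lo, hi)  # final interval width equals the product of the probabilities
```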

Name: Anonymous 2012-07-18 21:00

>>162
PIGEONHOLE PRINCIPLE MOTHERFUCKER DO YOU SPEAK IT

Name: Anonymous 2012-07-18 21:03

>>163
i'm well aware of the pigeonhole principle, dumbass. do you not notice that a) you weight the intervals by entropy, which is equivalent in compression to huffman, and b) you pick a fixed-point value at the end, and fixed-point values take up as much space as they have precision. hence: it's a shitty compression and nobody uses it. that doesn't change the fact that it exists and i had to learn about it in class.
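for the record, the pigeonhole counting that rules out "unlimited compression of any data" (a one-line sketch, no assumptions beyond counting bit strings):

```python
# There are 2**n distinct n-bit files, but only 2**n - 1 bit strings
# strictly shorter than n bits, so no lossless scheme can shrink every
# n-bit input: at least two inputs would share an output.

def strings_shorter_than(n):
    # count of bit strings of length 0..n-1: 2**0 + 2**1 + ... + 2**(n-1)
    return sum(2**k for k in range(n))

n = 16
print(2**n)                     # 65536 files of exactly n bits
print(strings_shorter_than(n))  # 65535 possible shorter outputs
```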

Name: Anonymous 2012-07-19 8:10

>>162
Are you trying to describe http://en.wikipedia.org/wiki/Arithmetic_encoding, which everybody wants to use because it's AWESOME, but it's patented, so freetards generally avoid it?

Name: Anonymous 2012-07-19 11:56

Arithmetic encoding is a lot of hot wind. It relies strongly on a pattern predictor, as it encodes the difference between the predictor's output and the actual data. A badly chosen predictor will ruin your compression ratio.

Also, I do not understand the patents. I have the article "Practical implementations of arithmetic coding" from the book "Image and Text Compression" (1992). It does not speak of patents, only of the NASA grant that supported it. I also browsed through a couple of the patents mentioned in the wiki page, and they are just silly, as they seem to patent a certain way of implementation, not the algorithm. Once again it is illustrated that software patents are bull and need to be abolished. Guess the patent lawyers make too much money from it.

Name: Anonymous 2012-07-19 22:11

>>165

I think ey is talking about huffman coding, which is good when your file consists of symbols with an uneven distribution. The number of bits used to represent each symbol is adjusted so that the more frequent symbols get shorter representations and the less frequent symbols get longer ones. If the frequent symbols show up a lot, the net size of the file is reduced. This technique isn't effective on data with an even distribution of symbols: there need to be symbols with high frequency counts, like the letter e and the space in English text.

Name: Anonymous 2012-07-19 22:18

>>165
yeah that's the one, i forgot the name. how is it at all different in compression ratio from huffman coding? (not saying it's not, i just don't get why it would be)
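one way to see the gap (a toy calculation, with an assumed symbol probability): huffman must spend a whole number of bits per symbol, at least 1, while arithmetic coding spends about -log2(p) bits, which can be a small fraction of a bit for very probable symbols.

```python
import math

# If one symbol has probability 0.99 (assumed for illustration),
# arithmetic coding spends about -log2(0.99) bits on it, while
# huffman cannot spend less than 1 whole bit per symbol.
p = 0.99
bits_arith = -math.log2(p)
print(f"arithmetic: ~{bits_arith:.4f} bits/symbol, huffman: >= 1 bit/symbol")
```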

Name: Anonymous 2012-07-19 22:27

>>167
ey
Fuck off and die of a slow debilitating disease you little faggot dipshit.  If it ain't singular they, it's crap.

Name: Anonymous 2012-07-20 1:01

>>169

reevaluate your hateful language, friend. i think you will find that any negative connotations you would seek to attach to "faggot", or "nigger" as the mood fits you, actually apply to yourself, sir.

Name: Anonymous 2012-07-20 1:36

>>170
I don't think so, fagstorm.  Fuck you and die.  And I'm no homophobe nor racist.
