
Infinite Compression techniques

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-09 9:48

I present a system which would be capable of unlimited compression of any data.
Theory:
Every file can be mapped 1:1 to a positive unsigned integer (the integer representing the data itself).
Every integer can be represented as a range of floating-point numbers,
i.e. 3 is the range from 3.000... to 3.999...
Now if we multiply the original number by 10: 3*10=30,
the range is also multiplied: 30.000... to 39.999...; all of these numbers, divided by 10, give 3 as the integer.
Suppose we can alter the original number by shifting the range up or down by supplying an extra factor:
3+1 or 3-1; with these, the range 3.000...-3.999... becomes 4.000...-4.999... and 2.000...-2.999... respectively.
The compression is as follows. The original number is multiplied by a huge scale to create a number
which is at least twice as long in file length, giving a very large floating-point range.
Now we widen this range by adding a 64-bit scale modifier (applied to the original number) which shifts the range up and down, so the span of the range is now 2^64 times bigger than the original.
The compression is a search for any number in that huge range which can be represented more compactly;
when one of these is found (for some function like e^A), A is recorded along with the scale modifier.
Since the range is enormous, there are certainly some numbers which can be represented in short form as
function(x) = number_in_range.
The decoding is as follows: function(x) is run and the resulting number is divided by the scale, then the scale modifier is applied
to get the original number, which is converted back to a file.
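
A minimal C sketch of the scheme described above, at toy scale: a 16-bit value stands in for the "file", plain doubles stand in for the arbitrary-precision arithmetic a real attempt would need, and f(y) = 2^y is the only search function. All concrete values are illustrative, not from the thread.

/* Toy sketch of the range search from >>1. A 16-bit "file" Z,
 * doubles instead of bignums, f(y) = 2^y as the search function. */
#include <stdio.h>
#include <stdint.h>
#include <math.h>

int main(void) {
    uint64_t Z = 65000;        /* the "file", as an integer */
    uint64_t S = 1024;         /* max scale modifier: range is Z +/- S */
    double   K = 4096.0;       /* scale factor widening the range */
    double  lo = (double)(Z - S) * K;
    double  hi = (double)(Z + S) * K;

    for (int y = 1; y < 64; y++) {
        double r = ldexp(1.0, y);               /* r = 2^y, exactly */
        if (r >= lo && r <= hi) {
            int64_t D   = (int64_t)(r / K);     /* back into original range */
            int64_t mod = (int64_t)Z - D;       /* recorded final modifier */
            printf("record: y=%d, modifier=%lld\n", y, (long long)mod);
            /* Decode: recompute 2^y, divide by K, apply the modifier. */
            int64_t Zdec = (int64_t)(ldexp(1.0, y) / K) + mod;
            printf("decoded Z=%lld (original %llu)\n",
                   (long long)Zdec, (unsigned long long)Z);
            return 0;
        }
    }
    printf("no compactly representable number found in range\n");
    return 0;
}

Note what actually gets recorded: y fits in a byte, but the modifier needs about log2(2S) bits, which is exactly the kind of bookkeeping the pigeonhole objections later in the thread point at.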

Name: FrozenAutist !BYmn6QVNCw!ARqrDGkA3TG28fp 2011-11-09 10:21

we all know that I'm going to write that in

a) Javascript, thus completely defeating the purpose of ``fast algorithms''

b) in Frozen++, aka, autism c

Name: Anonymous 2011-11-09 10:22

This is retarded.
Nice copipe, bro.

Name: Anonymous 2011-11-09 10:34

>>2

c) FrozenScript, aka, autism js

Name: Anonymous 2011-11-09 10:56

lawl infinite compression is cool but what is the data loss?

btw mantissa space is not infinite, hell not even accurate, so all these arithmetic compressions based on math are crazy useless

my idea for infinite compression:
code everything as 0 or 1, on average you only lose 50% of the data!

Name: sage 2011-11-09 10:57

>>5
also sage in all fields

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-09 11:01

The program will be written in C.
In fact, I have a couple of prototypes, but I can't find a good search function that is fast enough to cover the space and at the same time yields enough numbers to fit in the range (this can be fixed by expanding the scale modifier, but that would in turn bloat the search space, making the search exponentially slower).

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-09 11:04

>>5
There is no data loss. Some files will not be compressible with some functions, if there are no numbers that the function can produce inside the compressed range.

Name: Anonymous 2011-11-09 11:14

>>8
well what the fuck do yo' mean 'infinite'
an infinite compression ratio would imply that you can get something from nothing, i.e. encoded data has length '0'
this is fucking impossible even for autistic math

also this is 'kinda' like range encoding with floating point numbers

Name: Anonymous 2011-11-09 11:42

Infinite compression implies 1 GB of data is compressed to 1 byte.

There are 2^8589934592 different possible gigabytes of data (a number with about 2.6 billion digits).

There are only 256 different possible bytes.

Explain how you will map 256 values to 2^8589934592 values losslessly.
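
The counting argument in runnable form (a toy of my own: 2-byte "files", 1-byte "codes"). The particular folding function is arbitrary; any function from 65536 inputs to 256 codes hits the same wall.

/* Pigeonhole in code: push every possible 2-byte "file" through an
 * 8-bit "compressed" code and count collisions. 65536 inputs, 256
 * codes: at least 65536/256 = 256 inputs must share some code. */
#include <stdio.h>

int main(void) {
    unsigned count[256] = {0};
    for (unsigned f = 0; f < 65536; f++)
        count[(f ^ (f >> 8)) & 0xff]++;   /* the "compressor" */
    unsigned worst = 0;
    for (unsigned c = 0; c < 256; c++)
        if (count[c] > worst) worst = count[c];
    printf("most-crowded code stands for %u distinct files\n", worst);
    /* Decompression cannot tell those files apart: lossless is impossible. */
    return 0;
}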

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-09 11:47

>>10
>Infinite compression implies
I don't imply anything, read >>1 to see what it describes.
a function: stored as a few bytes describing the function number, if multiple functions are used
an 8-byte long long: the scale modifier that alters the range for the original number (it could be larger, e.g. 64 bytes expands the search space by a factor of (2^8)^64)
a parameter to the function: can be several doubles, long longs or floats, about 20-30 bytes max
That's the best result you can achieve (if the file compresses successfully).
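
As a sketch, the record described in >>11 might be laid out like this in C; only the sizes come from the post, while the field names and the 2-byte function id are my guesses.

#include <stdint.h>

/* One compressed record per chunk, per >>11 (sizes from the post,
 * names and exact widths are assumptions). */
struct record {
    uint16_t func_id;     /* which search function was used ("a few bytes") */
    int64_t  scale_mod;   /* the 8-byte long long scale modifier */
    uint8_t  param[30];   /* function parameter: doubles/long longs/floats */
};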

Name: Anonymous 2011-11-09 12:27

>>11
encode "OP is a fag"
show teh relevant 'steps'
and show the compression ratio/factor

Else OP confirmed for 'all fur-coat, no knickers, bitches!'

Name: Anonymous 2011-11-09 12:55

You only need 0 bits to compress all possible data and maybe even everything in existence, including yourself (all natural numbers).
However, finding anything you want within such a large amount of data may be infeasible, as it'd be like searching for a particular, special small needle in an infinite haystack of small needles.
The next question would be: what is the shortest program that can generate my data? You'll need to learn about Algorithmic Information Theory and Kolmogorov Complexity to answer that question.

What about compressing any data? If we could pick any number at random, we'd expect a good deal of numbers to be Kolmogorov-random (no algorithm shorter than the data); however, there is the question of whether we can truly perform this 'pick'. If your random number is generated by a pseudo-random number generator, it is computable, thus it will be very compressible. If your random number is generated by, let's say, quantum noise, then it depends on a number of factors, for example which interpretation of quantum mechanics is the real one.

In the case the world is fully computable (PRNGs being used where we assume true randomness), everything is very easy to compress. In the case some variant of MWI is true (which I consider to be the case: first, because considered independently it's the interpretation that makes the most sense, and it is also the most probable in the light of various formalizations of Occam's Razor; second, my current (high-confidence, but not absolute) metaphysical beliefs include Arithmetical Realism and a form of mathematical monism, which gives rise to MWI-like quantum mechanics far too easily once the Anthropic Principle is considered over them, while making most other interpretations highly unlikely), quantum noise will be less likely to be compressible by a computable function, i.e. more likely to be Kolmogorov-random.
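
The PRNG point in concrete form (my illustration, not the poster's): a megabyte of rand() output "compresses" to the four-byte seed, because the generator itself is the decompressor.

/* A megabyte of pseudo-random bytes regenerated from a 4-byte seed:
 * computable data is very compressible, since the program plus the
 * seed IS the compressed form. srand/rand are the C library's PRNG. */
#include <stdio.h>
#include <stdlib.h>

#define N (1 << 20)

int main(void) {
    unsigned seed = 12345;        /* the entire "compressed file" */
    static unsigned char data[N];

    srand(seed);                  /* "decompression" starts here */
    for (int i = 0; i < N; i++)
        data[i] = (unsigned char)(rand() & 0xff);

    printf("regenerated %d bytes from a %zu-byte seed (first byte: %u)\n",
           N, sizeof seed, data[0]);
    return 0;
}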
If you don't share my opinion on the nature of quantum mechanics, there is a simpler way to attain the same effect of incompressibility. Consider this thought experiment: you exist at time t0 in some room; at time t0+delta_t, n (choose a sufficiently large n) identical copies (be it subparticle-scale, or merely identical quantum states) of yourself and the room you are in are made (they can be placed in various random spatial locations; this part is irrelevant, as long as the locations don't overlap), and each room is altered by adding an index, visible to each copy, corresponding to the number of the copy (your initial room has index 0). You are now asked: what number do 'you' see as the index? This assumes that all copies are conscious and that philosophical zombies are likely nonsense (please don't consider souls or similarly silly stuff). If you must insist on crazy metaphysics (souls, zombies, etc.), then consider a computer or some other digital device instead of yourself, and ask, from the perspective of the computer, what index it sees.
The obvious answer is that you could see any number from 0 to n, and this number would be truly random (experience is not shared; physical states are consistent with mental states). In a simplified, more joking form, it would be like looking at the set of natural numbers (or a finite subset of them) and asking those numbers what the probability of being one of those numbers is (from the perspective of each number it's 1, but from your 'god'/bird's-eye perspective it's 1/n). Anthropic reasoning can be confusing at times.


tl;dr: Computable data is likely to be compressible. Uncomputable or truly random data is likely not to be compressible. "Infinite" compression (0 bits) may be achieved, but that doesn't mean you'll be able to find any particular data in the infinite mess it generates (all possible data can be found in a normal number, such as the square root of 2 (conjectured to be normal), or in the set of natural numbers and so on, which can be computed, even if the algorithm may never halt).

Immediately relevant reading:
"An Introduction to Kolmogorov Complexity and Its Applications" by Li M., Vitanyi P.

Very interesting reading for the more philosophically inclined:
http://www.hpcoders.com.au/nothing.html
http://arxiv.org/abs/0912.5434


(Wait, why am I responding seriously to this reposted thread? /autism)

Name: >>13 2011-11-09 13:22

Oops, forgot to include 2 more papers that are relevant at the end:

Marchal B., 2001, Computation, Consciousness and the Quantum, in Teorie & Modelli, n.s., VI, 1, 2001, pp. 29-43. Special issue on the History and Problems of Quantum Interpretations of Consciousness, ed. by M. Battacchi, V. Fano. (with the kind permission of V. Fano).
"Law Without Law" by John Archibald Wheeler

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-09 13:32

>>13
1. some data may not be compressible using this technique (it depends on the function used: N inputs x X scale modifiers will yield X*N files)
2. the idea is that File -> Number -> Interval is not RANDOM; it turns random numbers into a range.
I'll explain the process more:
Scale Modifier = S (the exact S is found for a given function input; S expands the search space by making the number more "fuzzy")
a number Z is made into a range by subtracting and adding the scale modifier:
Z-S=Low, Z+S=Hi, Range = Low<>Hi
By multiplying Z by the scale K, the range is multiplied: search space = Low*K <> Hi*K
Any number in the search space is not random; it is a huge chunk of the number line.
Functions which reach into that search space will produce an input I which expands into Low*K < func(I) < Hi*K
By dividing result/K we get back into the original range Z-S < result < Z+S; now we apply +-S to get Z.
Z is now converted to a file.
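
A worked trace of the loop above with small illustrative numbers (mine, not from the thread): take Z = 1000, max S = 10, K = 100. Then Low = 990, Hi = 1010, and the search space is 99000 <> 101000. The squaring function reaches into it: 317^2 = 100489. Record the function, I = 317, K, and the final modifier. Decode: 100489 / 100 = 1004 (integer part); the recorded final modifier Z - 1004 = -4 gives 1004 + (-4) = 1000 = Z, which is converted back to the file.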

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-09 13:40

Z-S=Low, Z+S=Hi, Range = Low<>Hi
S here is meant to be the maximum S (which defines the range's maximum reach), not the final S (which is smaller).

Name: Anonymous 2011-11-09 15:21

>>10
python!

Name: HAXUS THE Storage ADmin 2011-11-09 19:40

Infinite compression implies 1 GB of data is compressed to 1 byte.

need to know how to do this for the autists at work that generate multi-gigabyte render files in AE and generally clog up the file server with their shit.

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-09 20:26

Note that K itself can be modified with a scale modifier for search windows;
hypothetical example:
/*
 BASE: "1.0000000000000000000000000000000000001"
 Power: > 0
 0x01 = fixed start byte
 Z = [0x01] + [1023 bytes data]
 Z = 1024 bytes, S = 500 bytes (diff), K = 1024*16 bytes (power of 2)
 Z - S = Low
 Z + S = Hi
 Low * K = LowBound
 Hi * K = HiBound
 To shift the window for the search:
 Kmod = 256 bytes (modify to Low*(K-Kmod) <> Hi*(K+Kmod))
 final: record Kmod + ScaleMod + Power < filesize
 (Z-S)*(K-Kmod) < search range < (Z+S)*(K+Kmod)
*/

Name: Anonymous 2011-11-09 20:40

Why haven't you killed yourself yet?

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-09 21:00

K itself can be dynamic, like e^x, so the search space could be
(Z-S)*(e^x) <> (Z+S)*(e^x), where x is the power of e recorded with the ScaleMod

Name: Anonymous 2011-11-09 21:04

Name: Anonymous 2011-11-09 21:19

>>22
Don't bother, YABT. You don't need lots of words to understand the pigeonhole principle, or, equivalently, to answer such simple questions as "how many different files can possibly be represented with n bits?"

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-09 21:25

fib series: 1... (there are slower-growing series; this is an example)
shift window: K = e^x:
(Z-s)*K < fib(n) < (Z+s)*K
the idea is that by shifting the window we find a fib(n) which lies inside the bounds.
Now we record x for e^x and s for Z
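
A sketch of >>24 in C, with doubles instead of bignums and made-up values for Z and s: slide x upward until some Fibonacci number lands inside the scaled window; (n, x, s) is then what would be recorded.

/* Window-shift search from >>24 at toy scale: find fib(n) inside
 * (Z-s)*e^x .. (Z+s)*e^x by sliding x. Doubles, not bignums, so
 * large fib values are approximate; fine for a demonstration. */
#include <stdio.h>
#include <math.h>

int main(void) {
    double Z = 9000.0, s = 50.0;              /* toy "file" and half-width */
    double fib[90] = {0.0, 1.0};
    for (int i = 2; i < 90; i++)
        fib[i] = fib[i - 1] + fib[i - 2];

    for (int x = 1; x < 40; x++) {            /* shift the window: K = e^x */
        double lo = (Z - s) * exp((double)x);
        double hi = (Z + s) * exp((double)x);
        for (int n = 2; n < 90; n++)
            if (fib[n] >= lo && fib[n] <= hi) {
                printf("fib(%d) lies inside the window at x=%d\n", n, x);
                return 0;                     /* record n, x and s */
            }
    }
    printf("no hit; widen s or allow larger x\n");
    return 0;
}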

Name: matth@eecs.berkeley.edu !!mEFVYLkcRFWjjY+ 2011-11-09 21:51

>>24
That is stupid. Seriously. Can you do more than google shit you fucking idiot?

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-09 22:42

>>23
n bits can represent 2^n files; for sufficient n, 2^n becomes huge enough to contain all the data in the universe (but not the entire set of "possible files", which is near-infinite).
128 bits could address 2^128 files.

Name: Anonymous 2011-11-09 22:59

>>26

that depends on what all the data in the universe is. For a file to represent the state of the universe, it would need to represent the state of each component of the universe. If there are k components of the universe (maybe atoms or some other basic unit) and each component must be in one of j states (assuming the states are discrete), then the universe as a whole can be in up to j^k different states (e.g. k = 2 components with j = 3 states each already gives 3^2 = 9 states). Each state of the universe must correspond to a state of a file representing the universe. So basically, the number of bits needed for the file is linearly proportional to the "size" of the universe. You could use log2(k) bits to address the components, but you'll need k * log2(j) bits to represent the states of all of the components.

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-09 23:51

>a state of a file representing the universe
I don't mean that, I mean real files that can be written. 1 Mbit can hold 2^1000000 possible files, which are probably impossible to store all at once, but 1 Mbit of space can address any one of them.

Name: Anonymous 2011-11-10 1:11

>>28

But each file is 1 Mbit in size... The address is essentially the exact same thing as the file.

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-10 1:25

>>29
Each file could be the entire universe or 1 bit; the 1 Mbit space is only the address (a function parameter)

Name: Anonymous 2011-11-10 1:32

>>30

oh alright, so you are just enumerating the set of files, and then addressing them with the smallest address needed.

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-10 3:41

The people thinking the pigeonhole principle has any relevance to range search:
1. the number of possible files which can be encoded is proportional to the ScaleMod and Kmod, and is much smaller than the set of all possible files.
2. most of the "possible files" are random garbage which you don't want to compress anyway, as in >>13, so it's best to concentrate on files which can be compressed with the range search.
3. the idea is not limited to a single search function (e.g. fibs), and the scales/Kmods/ScaleMods can be generated by functions themselves (compactly, storing at most 1/4 of the file length in parameters).

Name: Anonymous 2011-11-10 5:35

Hey FrozenVoid, I just want to say I love you and your posts so much. Maybe code something like the embryo system in nature, i.e. you have a small seed file which you then grow into the uncompressed file, using some sort of CA ruleset that maybe modifies itself while it's being iterated? I know this is possible because an entire human can be encoded in roughly 800 MB (just the set of chromosomes!)

You can do it! You shine brighter than any of these narrow-minded faggots! ^___^
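
A toy take on that seed-growing idea (my sketch, nothing from the thread): a one-byte rule number plus a one-byte seed position "grow" into kilobytes of output via an elementary cellular automaton. The catch, as ever: only outputs reachable by some (rule, seed) pair can be "decompressed" this way.

/* Grow a big pattern from a tiny seed: elementary CA, Wolfram rule
 * numbering. Rule 110 and the seed position are arbitrary choices. */
#include <stdio.h>
#include <string.h>

#define W 64
#define STEPS 32

int main(void) {
    unsigned char rule = 110, row[W] = {0}, next[W];
    row[W / 2] = 1;                       /* the "seed file" */
    for (int t = 0; t < STEPS; t++) {
        for (int i = 0; i < W; i++)
            putchar(row[i] ? '#' : '.');
        putchar('\n');
        for (int i = 0; i < W; i++) {     /* wrap-around neighborhood */
            int n = (row[(i + W - 1) % W] << 2)
                  | (row[i] << 1)
                  |  row[(i + 1) % W];
            next[i] = (rule >> n) & 1;    /* bit n of the rule byte */
        }
        memcpy(row, next, W);
    }
    return 0;
}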

Name: Anonymous 2011-11-10 6:10

>>13
Thanks for the post and the links. Will look inside shit threads more often.

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-10 6:36

I'm thinking of writing a real compressor, but I don't have any motivation (it's boring and has a very low effort/reward ratio).
If anyone wants technical details, ask; but you'll have to write it yourself.

Name: Anonymous 2011-11-10 6:39

INFINITE COMPRESSION IS IMPOSSIBLE

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-10 6:43

>>36
Impossible is nothing. You define "infinite compression" as some arcane magic formula which makes everything smaller.
This is more of a "search algorithm" which, when successful, writes a number in a way which is very compact.

Name: Anonymous 2011-11-10 7:02

>>35
Could you post pseudocode for what it is expected to do?

Name: Anonymous 2011-11-10 7:02

>>1
So if 3*10 is the range 30.0000000... to 39.99999999..., then what integer represents the range 30.0000000... to 30.99999999...?

Name: FrozenVoid !!mJCwdV5J0Xy2A21 2011-11-10 7:05

>>38
openfile(name)
split the file into 1023-byte chunks
generate max S (Scalemod_max), about 500 bytes in length (2^m)
for each chunk {
    create buf = 1024 bytes
    buf[0] = 0x01 to prevent a leading null byte
    append the chunk to buf
    convert buf -> integer X
    X - S = low bound
    X + S = high bound
    multiply both bounds by K = 2^65535
    search for numbers inside the new bounds:
    find y such that Res = (1 + 1e-100)^y lies between low and high
    record y, divide Res / K = D
    record the difference between X and D as the Scalemod (Scalemod < max S)
    write the record as a chunk into the new file, repeat for the next chunk
}
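
One way the pseudocode above could look as real, compilable C, shrunk to sizes where the search actually terminates: a 3-byte chunk instead of 1024, K = 2^20 instead of 2^65535, and 3^y as the search function instead of (1+1e-100)^y (substitutions of mine, purely to make it runnable). Uses GMP for the big integers; build with -lgmp.

/* Toy reduction of >>40's pseudocode. Most chunks simply produce no
 * hit, which matches >>8: not every file compresses. */
#include <stdio.h>
#include <gmp.h>

int main(void) {
    unsigned char chunk[3] = {0x01, 0x5A, 0xBC};  /* 0x01 start byte + data */
    mpz_t X, S, lo, hi, res, D;
    mpz_inits(X, S, lo, hi, res, D, NULL);

    mpz_import(X, 3, 1, 1, 0, 0, chunk);   /* buf -> integer X */
    mpz_set_ui(S, 4096);                   /* max scale modifier */

    mpz_sub(lo, X, S);                     /* X - S = low bound  */
    mpz_add(hi, X, S);                     /* X + S = high bound */
    mpz_mul_2exp(lo, lo, 20);              /* both bounds * K = 2^20 */
    mpz_mul_2exp(hi, hi, 20);

    for (unsigned long y = 1; y < 100; y++) {
        mpz_ui_pow_ui(res, 3, y);          /* candidate Res = 3^y */
        if (mpz_cmp(res, lo) >= 0 && mpz_cmp(res, hi) <= 0) {
            mpz_fdiv_q_2exp(D, res, 20);   /* D = Res / K */
            mpz_sub(D, X, D);              /* Scalemod = X - D */
            gmp_printf("hit: y=%lu Scalemod=%Zd\n", y, D);
            mpz_clears(X, S, lo, hi, res, D, NULL);
            return 0;
        }
    }
    printf("chunk not compressible with this function\n");
    mpz_clears(X, S, lo, hi, res, D, NULL);
    return 0;
}

Decoding reverses it: recompute 3^y, divide by K, add the Scalemod back to get X, and strip the 0x01 start byte.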
