I've always been interested in how compressed files work and why the compression factor is so low.
The entropy explanation isn't something I would accept without tinkering with the data. The idea of my compression algorithm (currently under development) is to
use algebraic properties of files to express the data in a more concise way.
The thing might sound trivial, but its implementation is not.
Normal compression works by splitting the FILE into pieces, finding the redundant ones, and expressing them in the minimum volume.
Imagine a FILE read in and converted to an arbitrary-precision integer. Now consider all the myriad ways to generate said integer. Sounds difficult? Not so much.
1. Let's take a prime number, e.g. 2, and raise it to the power closest to our integer, e.g. 2^20000. Note the difference (FILE - result).
2. Keep the power giving the smallest difference,
and proceed to the next step:
3. If the difference is negative, find 2 raised to a power
X which closes the gap, subtracting it from the term in step 1.
If the difference is positive, just add the power of 2 closest to the difference. Repeat until the difference is zero.
The end result could be something like
2^6+2^5-2^3+2^1-2^0=Integer=FILE
It can be improved further by using powers of other prime numbers, with the idea of picking the expression taking the least space.
The same thing can be accomplished with arbitrary-length fractional exponents like 2^123712.1282, which would converge faster but will require some fixing to convert to a stable binary result.
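Steps 1–3 above can be sketched in Python for base 2. This is a hypothetical illustration of the closest-power search as I read it; the function name and details are my own, not the actual implementation:

```python
def closest_power_terms(n):
    """Greedily express a positive integer n as a signed sum of powers of 2.

    Each iteration picks the power of 2 closest to the remaining
    difference, records its sign, and repeats on what is left over,
    mirroring steps 1-3 above. Returns a list of (sign, exponent) pairs.
    """
    terms = []
    remainder = n
    while remainder != 0:
        sign = 1 if remainder > 0 else -1
        mag = abs(remainder)
        k = mag.bit_length() - 1          # 2**k <= mag < 2**(k+1)
        if 2 ** (k + 1) - mag < mag - 2 ** k:
            k += 1                        # 2**(k+1) is actually closer
        terms.append((sign, k))
        remainder -= sign * 2 ** k
    return terms

# Example: 90 = 2^6 + 2^5 - 2^2 - 2^1
print(closest_power_terms(90))  # → [(1, 6), (1, 5), (-1, 2), (-1, 1)]
```

Reading the FILE back is just evaluating the sum: `sum(s * 2**k for s, k in terms)`. Trying other prime bases would mean replacing the `bit_length` shortcut with a logarithm in that base.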
Posted by FrozenVoid at 15:37
Name:
FrozenVoid 2009-06-22 11:09
I guess I have to work around MAPM idiosyncrasies somehow, since it's the only one that works.
Progress is looking good! I'd guess five or six more years before revelation sinks in, except you'll give up due to "language quirks" long before then.
Hello FrozenVoid
I'm interested in using your compression scheme as the basis for my thesis on data compression and source coding. I would appreciate it if you could post some statistics regarding compression rates and compression times, as well as benchmarks comparing your scheme to other algorithms such as Huffman, LZ, Arithmetic, SFE, etc.
Thank you
>>256
"Huffman, LZ, Arithmetic, SFE, etc" belong to another class of programs(i'd like to call it dictionary compression).
They compress redundant data. I 'compress' numbers(i'd call it algebraic composition).
There is no reason i couldn't drastically change e.g. my core routines to extract roots/logs instead of searching for powers
as long as they result in integers equal to files. I simply design equations.
Hey guys. I found a revolutionary data compression scheme too! It happened two days ago when I JUST GOT ENLIGHTENED OH MAN! So listen to what I thought of:
Everyone knows that DATA is represented by 0s and 1s yeah?
YEAH
SO
Why don't we recode DATA with smaller 0s and 1s, so many bits fit in one slot!?
FUCK YEA!!!
∧_∧
(-Ò∀Ó)
Name:
Anonymous 2009-06-23 19:38
>>267
That is just the basics of how compression works; you will need to actually program it, though. I hope this is clear
∧_∧
(-Ò∀Ó)
Name:
Anonymous 2009-06-23 20:11
>>268
Nope, I can't. I have an ASD_OUT_OF_DATAZ0R error or something ┐(´ー`)┌
Name:
Anonymous 2009-06-23 20:14
>>269
But I'm considering learning Hawaii++, oh well