
disprove info theory

Name: Anonopolis 2011-08-29 5:39

A conversation that took place between me and my friend. This was after a few months of him vaguely mentioning a project he had been working on.

Cameron Kenworthy
But I just disproved Shannon's information THEORY!!! AND YOU ARE THE ONLY PERSON WHO IS SMART ENOUGH TO UNDERSTAND!

ME:
Hmm
I've never studied the theory before
but I just looked at the Wikipedia article and it doesn't even have a controversial section

MY FRIEND:
It's really simple I'll explain

ME:
proceed

MY FRIEND:
I KNOW!
Shannon states that there is a limit of lossless compression of a message
this is referred to in his equations as H, or entropy rate
He also states that you cannot consistently compress any message, due to the fact that the data making up the message is random
In its initial state a message (like a file) has an entropy value, the total amount of information held in each character of the file
for computers it's 1 bit per character (binary duh)
The total entropy for the compressed message is equal to the initial message, therefore the entropy for each character is larger in the compressed message than the initial, but still limited by Shannon's formulas
But imagine being able to generate a seed based on any set of data (fucking impossibly difficult and convoluted I know, but bear with me) that could be used to regenerate the data, each character within that seed would have well past 1000 times Shannon's limit of entropy rate for a standard 15 kB picture!

MY FRIEND:
I refuse to tell you how I did it at this point, but I devised a set of algorithms to derive a seed for any data set, from 100 bits to 100 trillion terabytes, therefore I proved that Shannon's source coding theorem defining the limit of entropy rate is wrong, which redefines the past 60 years of information theory

MY FRIEND:
and the computer as we know it

ME:
and this seed is... small?

MY FRIEND:
yes, it's like a seed for generating a Minecraft world, a few characters that create billions

ME:
but the billions it creates are random
er... pseudorandom

MY FRIEND:
yes but it is consistent and defined by the seed
so the seed will generate the same thing every time

ME:
Hmm...
Very interesting
How long does it take to generate the data from a seed?

MY FRIEND:
therefore, if you can go backwards, and start with a billion-sized value and then create its seed, you break the entropy limit
exactly the amount of time it takes to make the seed from the data
because you use the inverse of the formulas for decompression that you used for compression

ME:
Genius

MY FRIEND:
*algorithms, not formulas
I KNOW!

ME:
but...
Does it work?
I mean, have you tested those algorithms?

MY FRIEND:
I didn't realize that it's exactly what I've been doing for the past 3 months
Yes, it works!
Excuse me for withholding the trillion dollar explanation though
don't worry, I'm writing down a list of all my smart friends; I'll pay their college fees and give them jobs

ME:
That's very benevolent of you

MY FRIEND:
agreed

ME:
Can you show me an example?

MY FRIEND:
Hmmm.... Well, I can compress the film American Beauty to 54 bits, and then decompress it

ME:
I think the data compression and decompression would be very slow for a whole movie, wouldn't it?

MY FRIEND:
I could probably get it smaller than that actually
Nope
a blink of an eye

ME:
But...

MY FRIEND:
... maybe two blinks

ME:
But a whole movie would be millions of numbers

MY FRIEND:
billions actually

ME:
the seed would have to do a bunch of calculations to get each number

MY FRIEND:
haha, and therein lies my secret
just think about fractals
And that's all I'

ME:
fractals...

MY FRIEND:
ll give away
Mandelbrot has outdone himself from the grave

ME:
one small pattern, turning into a large uniform one
hmm

MY FRIEND:
exactly

ME:
Ok, I'm just going to look at this logically

MY FRIEND:
go ahead

ME:
There are infinitely many possible sets of data

MY FRIEND:
correct, as infinite as its number system that is

ME:
You would need a unique seed for each set

MY FRIEND:
correct
don't worry, there are limits to the seed, but they far exceed (trillions of trillions of trillions of times) Shannon's limits

ME:
That means there are as many seeds as there are sets of data

MY FRIEND:
correct, infinite amounts

ME:
Let's say you wanted to compress the letter "A", and let's just say the seed for that is '1'
then "B" would be '2'

MY FRIEND:
I'm kind of leaving out a key concept here that would make it way simpler for you to understand, sorry about that
but computers can't think in 2

ME:
then "B" would be "10"

MY FRIEND:
2 is the same size as B to a computer

ME:
anyway
yes
yes it is
Compression usually works by shortening duplicate patterns
But... but... I don't get it!

MY FRIEND:
yes but the only way a computer can read binary is by knowing that all the characters are exactly the same length, or else it doesn't know which 1 does what
You will, give me a few months, Dan and I need to pay for legal fees and what not

ME:
Right, thanks
I wouldn't trust me either

MY FRIEND:
I just needed to shit all over Claude E. Shannon's life work ASAP

Name: =+=*=F=R=O=Z=E=N==V=O=I=D=*=+= !frozEn/KIg 2011-08-29 5:43

"My friend", yeah.

Also, fuck off, I have a priority claim on the infinite compression.
_____________________________________________
http://bayimg.com/image/iafhbaacj.jpg
orbis terrarum delenda est (the world must be destroyed)
http://encyclopediadramatica.com/Portal:Furfaggotry Furry Drama Encyclopedia

Name: Anonymous 2011-08-29 5:44

Which Touhou would you compress data with?

Name: anonopolis 2011-08-29 5:51

>>2
Holy shit you do. Except that isn't what he is talking about :P

Name: Anonymous 2011-08-29 6:07

>>3
your mom

Name: Anonymous 2011-08-29 8:05

>>4
I think it is. Sounds exactly like the real FV's retarded compression. >>2 isn't the real FV of course.

Name: Anonymous 2011-08-29 12:12

i think your friend is retarded, no offense.

Now HERE's something that'll blow your mind. If space is a continuum, the position of an atom can be expressed with infinite precision (0.129387126528423...). Therefore, by placing an atom at a precise position, we can store all the information in the universe, ever.
Any atom can contain all the information in the universe.
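A minimal sketch of the thought experiment, with exact rationals standing in for the idealized "position" (measurement, noise, and physics in general are exactly what this ignores; all names are illustrative):

```python
# Hypothetical encoder for the idea above: pack a bit string into the binary
# expansion of a single number in [0, 1), i.e. one infinitely precise
# "coordinate" of an atom.
from fractions import Fraction

def encode(bits: str) -> Fraction:
    # Append a 1-sentinel so the expansion terminates at a known marker
    # and leading/trailing zeros in the data survive the round trip.
    s = bits + "1"
    return Fraction(int(s, 2), 2 ** len(s))

def decode(x: Fraction) -> str:
    # Read the binary expansion back out, one doubling per bit.
    bits = ""
    while x:
        x *= 2
        bit = int(x >= 1)
        bits += str(bit)
        x -= bit
    return bits[:-1]  # drop the sentinel

data = "001101100111000"
assert decode(encode(data)) == data
```

The catch, as >>8 points out, is readout: telling one coordinate apart from another requires unbounded measurement precision, which is where the infinite storage quietly went.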

Name: Anonymous 2011-08-29 12:19

>>7
Therefore, space is obviously discrete. Assuming it is continuous, how would you sample the atom's position?

Name: Anonymous 2011-08-29 12:33

>>7
What is this frozenvoidesque shit?

Name: Anonymous 2011-08-29 13:05

>>8
that's not my problem, it's a purely theoretical thing.
It's obviously unfeasible; you should use something without charge, as distant as possible from anything else, or in a shielded space where you've eliminated all forces (or where the forces are equal in all directions).
Then you should find some macroscopic event that is highly dependent on the position of the atom.
Position is not the only option; any continuous quantity would work: velocity, energy...
i'm ready for my nobel now

Name: Anonymous 2011-08-29 13:09

>>10
Once you have that atom you can just store its state in a bit.  Then it'll be no trouble to read back.

Name: Anonymous 2011-08-29 13:24

>>7
Fortunately, space-time is not a continuum, it is discrete as Zeno's paradox can illustrate to any third-grader. Space-time is quantized with maximum entropic-resolution being that of the plank unit.

Name: Anonymous 2011-08-29 13:25

>>12
I meant ``Planck Unit''.

Name: Anonymous 2011-08-29 13:45

>>12
>Fortunately, space-time is not a continuum, it is discrete as Zeno's paradox can illustrate to any third-grader.
If you hadn't taken to huffing glue and consequently dropped out in seventh grade, they would have told you about convergent series. Just sayin'.

Name: Anonymous 2011-08-29 13:53

http://en.wikipedia.org/wiki/Pigeonhole_principle

Required reading for anyone who claims to have invented an awesome new compression algorithm.
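The counting argument fits in a few lines, assuming "compress" means "map losslessly to a strictly shorter bit string":

```python
# Pigeonhole count for a hypothetical "always compresses" scheme:
# there are fewer strings shorter than n bits than strings of exactly n bits.
n = 20
inputs = 2 ** n                          # bit strings of length exactly n
outputs = sum(2 ** k for k in range(n))  # all strings of length 0..n-1
print(inputs, outputs)                   # 1048576 vs 1048575
assert outputs == inputs - 1             # one pigeonhole short: two inputs
                                         # must share an output, so the
                                         # scheme cannot be lossless
```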

Name: Anonymous 2011-08-29 14:30

Claude Shannon Tweed

Name: Anonymous 2011-08-29 15:17

>>7
That's the problem with space being a continuum (if it is, which I doubt); if it were, hypercomputation would be possible: you could implement AIXI (not AIXItl), run all computable universes using a single particle, or even emulate your own universe recursively.
Some fiction about this idea: http://qntm.org/responsibility

Personally I'm an Occam's razor user, which means I prefer a discrete computational multiverse (Ultimate Dovetailer; computational multiverse cardinality = Aleph Null (ℵ0) if arithmetic is consistent, less if not) to one which allows hypercomputation (the complete Mathematical Universe Hypothesis, which includes more universes than the UD; even one hypercomputational universe would imply infinite amounts of hypercomputational oracles which can perform uncomputable operations).

The more difficult question is: is transfinite induction ontologically valid above ℵ0? Is it even valid up to ℵ0 (which amounts to asking: is arithmetic consistent)?

Name: Anonymous 2011-08-29 15:45

>>1
Looks like someone has just discovered pseudorandom numbers.

Yes, you can generate arbitrarily long sequences that look like random data.

BUT there are a limited number of those.

Take for example some 4GB sequence you generated from a 32-bit number, via something like a linear congruential generator. Now you can claim to represent that 4GB of "random data" as one 32-bit number.

Then change only ONE bit in those 32 billion.

How would you compress that now?

You can encode any stream of data as a seed + a set of deltas. Those deltas are where the "extra information" went, and they can be arbitrarily large, up to the size of the data itself.
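A toy version of the point above, assuming a glibc-style LCG (the constants and seed are arbitrary illustrations, not anyone's actual scheme):

```python
# A 32-bit seed deterministically expands into as much "random-looking"
# data as you like, but flip one bit of the output and you have data that
# no seed reproduces: it now costs seed + a recorded delta to store.
def lcg_bytes(seed: int, n: int) -> bytearray:
    out = bytearray()
    state = seed & 0xFFFFFFFF
    for _ in range(n):
        state = (1103515245 * state + 12345) & 0xFFFFFFFF  # glibc-style LCG
        out.append(state >> 24)                            # top byte of state
    return out

a = lcg_bytes(seed=42, n=1024)
b = lcg_bytes(seed=42, n=1024)
assert a == b                      # same seed, same stream: the "compression"

a[500] ^= 0x01                     # change ONE bit of the generated data
assert a != lcg_bytes(42, 1024)    # no longer anything this seed produces;
                                   # recovering it needs the delta (index, bit)
```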

Name: >>17 2011-08-29 15:46

Also, >>1
To represent all possible computational universes you only need 0 bits, which means you could find all possible data within them (or just use a normal number, such as square root of 2 or pi: you'll be able to find ANY data in their full infinite expansion). However, how would you find the data without knowing its address? Hence to encode some data you need to encode its address, which may very well be as large as the data itself (worst-case scenario). Best case scenario is the smallest program which can generate your data (this is close to what AIXI does using Solomonoff induction), except the method for finding this program is uncomputable (a computable variant will just leave you finding smaller versions, but you'll never know for sure that it's the smallest).

Relevant links to the second part:
http://arxiv.org/abs/0912.5434
http://arxiv.org/abs/quant-ph/0011122
http://www.hutter1.net/ai/aixigentle.htm
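A rough simulation of the addressing cost (the pattern and seed are arbitrary; a random digit stream stands in for the expansion of a normal number):

```python
# Any k-digit pattern eventually appears in an unbounded random digit
# stream, but its first occurrence sits around position 10**k on average,
# so writing down the address costs about as many digits as the data itself.
import random

random.seed(0)  # fixed seed so the sketch is repeatable

def first_offset(pattern: str) -> int:
    """Index of the first occurrence of `pattern` in a stream of random digits."""
    window = ""
    i = 0
    while True:
        window = (window + str(random.randrange(10)))[-len(pattern):]
        i += 1
        if window == pattern:
            return i - len(pattern)

offsets = [first_offset("2718") for _ in range(20)]
print(sum(offsets) / len(offsets))  # on the order of 10**4: as big as the data
```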
