
Hypercomputation

Name: Anonymous 2012-03-03 17:07

Possible or not?

Name: Anonymous 2012-03-03 17:11

Hyperoperators

Name: jew 2012-03-03 17:12

hyperpooping

possible or not

Name: Anonymous 2012-03-03 17:21

>>2
What do you mean? You mean "hyperoperators" allow for hypercomputation? What do you mean by "hyperoperators"?
In case you mean hyperoperations, I have no idea how to define things like Knuth's up-arrow notation for a non-integer/negative/complex number of arrows; could you explain how?
Also, I read about Schröder's equation which is supposedly a way to find the superfunction of any function, but I do not understand all the symbols and notations because I am a NEET.
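For the integer case at least, hyperoperations are easy to pin down. Here's a minimal sketch (my own toy recursion, using the standard base cases); extending n, the "number of arrows", to non-integer/negative/complex values is exactly the open question:

```python
def hyper(n, a, b):
    """Integer hyperoperation H_n(a, b): n=0 is successor, n=1 addition,
    n=2 multiplication, n=3 exponentiation, n=4 tetration, and so on.
    Defined here only for non-negative integer n and b."""
    if n == 0:
        return b + 1
    if b == 0:
        return {1: a, 2: 0}.get(n, 1)   # standard base cases per level
    return hyper(n - 1, a, hyper(n, a, b - 1))

hyper(3, 2, 3)   # 2**3 = 8
hyper(4, 2, 3)   # 2^^3 = 2**(2**2) = 16
```

Nothing in this recursion suggests how to interpolate between levels, which is why the fractional-arrow question is hard.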

Name: 1,4 2012-03-03 17:35

Also, how do I get into Zeno automata / scale-invariant cellular automata? Is consciousness capable of hypercomputation because it can transcend itself? (As far as I know, non-recursive Turing machines are practically never self-transcendental due to stupid naturally-occurring things like computational irreducibility, gambler's ruin and the butterfly effect.)

Name: Anonymous 2012-03-03 19:00

Hyperoperator Megaoverloading.

Name: Anonymous 2012-03-04 0:32

Unlikely, in its usual definitions, but I suppose some speculative theories do allow for it. I don't see how one could even recognize hypercomputation if it were staring them in the face: if we are Turing-emulable, we can't truly know whether there are systems capable of operating with concrete infinities. At best we can theorize, but never know experimentally (we could also always be described by a computational, finite truncation of such systems at any given time, although never in full).
However, don't let that bother you, there are plenty of ways to achieve "unlimited" computational resources without having to violate the Church-Turing Thesis.
>>5
I really doubt consciousness and self-consciousness (different things) require hypercomputation. In some computationalist theories of mind (the non-materialist ones; the materialist ones are forced to eliminate consciousness as a delusion), consciousness is merely awareness that goes on at certain abstract structures contained within a lower-level computation, while self-awareness is a more specific structure/process of which one can be conscious (and which can be formalized mathematically).

However, if such a theory of mind predicts too many unusual, unstable experiences (which we don't have), we might have to assume some non-computationalist theory of mind which involves hypercomputation. Most of those seem utterly implausible, and even deciding on a theory is difficult: the math everyone can agree on includes computational universality, but once you bring in concrete infinities the theories start diverging greatly, and there's no way for us to know which one would be more likely - not to mention that there's zero evidence that any of them is required by any current theory of mind.

We are self-overcoming because we can change our beliefs, but that doesn't make us particularly rational - it merely gives us a chance of being more correct, the more we can update those beliefs and the better the heuristics we choose to use.

Name: Anonymous 2012-03-04 12:11

>>7
there are plenty of ways to achieve "unlimited" computational resources
Kindly explain: what are these ways?

Name: Anonymous 2012-03-04 13:02

It depends on what "we" and our "universe" happen to be.
In the materialist viewpoint, there's just the universe and that's axiomatic - you don't ask what it is.
There is another popular viewpoint - the universe is just math viewed from the inside - which you can see in the writings of Max Tegmark, Schmidhuber, Bruno Marchal and so on.
Now the question becomes, "which math?" and "what is consciousness?"
Tegmark initially considered all consistent math, but that has its own problems, so his current position is limited to only the computable ones. Schmidhuber considers a Universal Dovetailer, or UD (a Universal Turing Machine running all programs via a scheduler/interlacer: step 0 of program 0 at tick 0; step 1 of program 0 and step 0 of program 1 at tick 1; and so on). Marchal takes the assumption that the mind is computable (computationalism) and also makes use of Schmidhuber's UD, but unlike Schmidhuber, who attaches consciousness to some specific body in some computable universe's run, he asks what subjective experience such a simulated being would have while inside the UD. The result turns out to be that, assuming computationalism, you very easily end up with a Platonic view of reality where all the possible experiences (and apparent physical worlds) of a computable mind end up within the UD with some measure, and nothing more or less has to be used to explain the entirety of physics (and most of mind). An interesting side-effect is that you get something very much like the quantum indeterminacy of QM (explicitly as MWI), for all possible worlds within such an ontology - it comes with being the observer. I'll refer to this particular ontology as COMP from here on (computationalism with (subjective) mind, as opposed to eliminative materialism, which is computationalism with mind-as-self-delusion).
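The dovetailing trick itself is easy to sketch. Here's a toy version where "programs" stand in as Python generators produced by an index function (an assumption for illustration - a real UD enumerates Turing machine programs):

```python
from itertools import count

def dovetail(programs):
    """Interleave infinitely many programs fairly: at tick t, admit
    program t, then run one step of each program admitted so far.
    Every program eventually gets arbitrarily many steps.
    `programs` maps an index to a fresh generator."""
    started = []
    for t in count():
        started.append(programs(t))      # admit program t at tick t
        for i, prog in enumerate(started):
            try:
                yield i, next(prog)      # one step of program i
            except StopIteration:
                pass                     # program i halted; keep going

# toy "program" i: counts upward from i forever
trace = dovetail(lambda i: count(i))
[next(trace) for _ in range(6)]
# yields steps of program 0, then programs 0-1, then 0-2, ...
```

The point of the ever-growing schedule is that no single non-halting program can starve the others, which is what lets the UD "run everything".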

Aside from these 3 more radical views on ontology, I also ask you to consider the simplest possible interpretation of Quantum Mechanics (by Occam or by some formalized complexity measure): the MWI (Many-Worlds Interpretation).

Now, given these possibilities for our reality, we have a few ways of attaining such "unlimited" computational resources:

1) If COMP, you can easily run simulations including yourself which continue running "in the dust", or more correctly platonically - as described in Permutation City (the simulation example in that particular book won't be stable within such an ontology; there are ways of fixing this problem, but they're outside the scope of this response).
2) Regardless of theory (still computationalist though) - http://en.wikipedia.org/wiki/Dyson's_eternal_intelligence
- not very practical, but might work.
3) If computationalism is false and we have concrete infinities in our mind's implementation or in physics - you have hypercomputation - unlimited computational resources right there (if you happen to be able to exploit them somehow).
4) If computationalism is true, a restatement of 1: you are your mind's informational/computational pattern, so you continue wherever that pattern can be found - be it some Tegmark Level 4 universe, somewhere in the UD, or a different MWI branch. I already explained the first two in example 1, so I'll explain the last one here. Consider running a random program seeded from quantum noise (assuming MWI): you can have all programs up to some bound run in the different branches starting at some point (this is trivial to code, although for it to be of any use, you'd have to have enough resources to run enough programs).
Imagine you run this program, it crashes and you forget about it. 100 years later, you're a simulation. You die. You end up with a continuation 100 years ago in some branch. Repeat as many times as you want. (This obviously neglects that if COMP is true you will have far more diverging continuations, but if COMP is ignored and only MWI is considered, this would work in a stable manner.)
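Since I claimed the random-program part is trivial to code, here's a sketch. Assumptions are mine: `os.urandom` stands in for a quantum noise source (a real implementation would need a quantum RNG so branches actually diverge), and Brainfuck stands in as a minimal universal language:

```python
import os, random

def run_bf(code, fuel=10_000):
    """Minimal Brainfuck interpreter (no input) with a step budget,
    so nonterminating programs are simply cut off. Returns the list
    of output bytes, or None for syntactically invalid programs."""
    tape, ptr, pc, out = [0] * 30_000, 0, 0, []
    jump, stack = {}, []
    for i, c in enumerate(code):          # pre-match brackets
        if c == '[':
            stack.append(i)
        elif c == ']':
            if not stack:
                return None               # unbalanced: invalid
            j = stack.pop()
            jump[i], jump[j] = j, i
    if stack:
        return None
    while pc < len(code) and fuel > 0:
        c = code[pc]
        if c == '+': tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-': tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>': ptr = (ptr + 1) % 30_000
        elif c == '<': ptr = (ptr - 1) % 30_000
        elif c == '.': out.append(tape[ptr])
        elif c == '[' and tape[ptr] == 0: pc = jump[pc]
        elif c == ']' and tape[ptr] != 0: pc = jump[pc]
        pc += 1
        fuel -= 1
    return out

# "quantum noise" stand-in: os.urandom seeds an ordinary PRNG
rng = random.Random(os.urandom(16))
prog = ''.join(rng.choice('+-<>[].') for _ in range(64))
run_bf(prog)   # run one random program, bounded
```

Under MWI with a true quantum seed, each branch would draw a different program; with enough runs and a large enough length bound, every short program gets executed in some branch.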

There are plenty of other tricks one can think of, but these are sufficient for now - and some of them might scare someone quite a bit; after all, this view of computationalism implies a pretty heavy form of immortality, much more random than MWI quantum immortality. On the upside, you might at least get to compute/simulate anything you could ever want, if you're lucky and plan things right.

References:

Tegmark:
http://arxiv.org/abs/gr-qc/9704009
http://arxiv.org/abs/0704.0646
http://arxiv.org/abs/physics/0510188
Schmidhuber:
http://arxiv.org/abs/quant-ph/0011122
Others:
http://arxiv.org/abs/0912.5434
http://www.hpcoders.com.au/nothing.html
Permutation City:
http://gregegan.customer.netspace.net.au/PERMUTATION/Permutation.html
Bruno Marchal's COMP:
http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract.html
MWI tricks:
http://www.paul-almond.com/ManyWorldsAssistedMindUploading.htm
Worst case scenario in a single universe:
http://en.wikipedia.org/wiki/Dyson's_eternal_intelligence

Name: >>9 2012-03-04 13:03

That was in response to >>8

Name: Anonymous 2012-03-04 18:10

HAA HAA DUBS HEE HEE HOO HOO

Name: Anonymous 2012-03-05 14:12

>>9
ok, so how do i scan the cosmic index? where can i even find the cosmic index? how do i predict the future? how do i travel through time? i am a virgin to all this stuff so explain it to me.

Name: 12 2012-03-05 14:13

i am not going to get the answer that easily, am i?

Name: Anonymous 2012-03-05 14:25

>>9
Consciousness can perceive time in different ways. So I thought about coding an app that perceives time retroactively. The problem is how the hell do I control what an app "perceives"? How are the conscious patterns found? I assume you are the same dude I talked to about a month or so ago who said that "consciousness finds itself". Does this mean that only the consciousness itself can perceive itself, i.e. I cannot do any calculations or computations of the consciousness of someone else?

Name: 14 2012-03-05 14:31

It seems all I need is to make an application that can perceive itself, i.e. has an "ego" of some kind.
Feedback loops are a way, but it kind of defeats the entire purpose of coding a self-perceptron! Here's why: If I want the application to perceive backwards in time, the temporal regresses (possibly hypercomputational) are MY burden and I have to calculate them myself.
So, how the hell do I do this shit?

Name: Anonymous 2012-03-05 14:34

>>12
ok, so how do i scan the cosmic index?
I don't understand what you mean by this. Is this like the Gödel number of some particular "universe" in the UD? I suppose you would have to run a modified UD which just scans for environment data you've captured, but I don't think that would always work, as a lot of it might be algorithmically incompressible.
how do i predict the future?
You can never predict the future by running a simulation of the same universe as the one you're in (even if you knew a program that runs it up to the present), either because the universe has finite resources or because you're part of it. However, it can be done from the outside. Let me be more explicit here:
- Program B("you") is emulated by Program A("some machine in the UD")
- Program B cannot compute Program A to a step that was not already computed. To put this more simply, if you were to run a simulation of some computable universe, you would never be able to simulate past the present moment while being part of said computable universe.
Solution? Just get Program B emulated by another program different from A. (In >>9 that's explained in point 1; read that book for a thought experiment showing how this could be possible.)

Of course, doing something like this kind of defeats the point of making a prediction (running the computation), as you might no longer need the result. So for practical purposes (while being part of "Program A"), just use some form of induction (like in most sciences) or an AI like AIXItl, or something more resource-efficient. The human mind happens to be a resource-limited general intelligence that is pretty damn good at prediction by inducing from its inputs.
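As a resource-cheap toy illustration of that kind of induction (nothing to do with AIXItl itself - this is just frequency counting, names are mine), a small Markov predictor:

```python
from collections import Counter, defaultdict

def fit_markov(seq, order=2):
    """Count next-symbol frequencies after each length-`order` context."""
    table = defaultdict(Counter)
    for i in range(order, len(seq)):
        table[tuple(seq[i - order:i])][seq[i]] += 1
    return table

def predict(table, context):
    """Return the most frequent follower of `context`, or None if unseen."""
    counts = table.get(tuple(context))
    return counts.most_common(1)[0][0] if counts else None

history = "abcabcabcabcabc"
model = fit_markov(history, order=2)
predict(model, "bc")   # -> 'a'
```

It only exploits regularities it has already seen, which is exactly the gap between practical induction and full prediction.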

how do i travel through time?
Not in any trivial way, but if you can perform step 1 in >>9, all you have to do is insert yourself in a simulation at the right state. This idea seems to have been explored by some in fiction as well: http://www.fanfiction.net/s/5389450/1/The_Finale_of_the_Ultimate_Meta_Mega_Crossover.
Why would you want to travel through time though if you can just simulate anything you'd want to find out without having to expose yourself to any risks?

Name: Anonymous 2012-03-05 14:46

>>16
>so for practical purposes (while being part of "Program A"), just use some form of induction (like in most sciences) or an AI like AIXItl or something more resource efficient.

In practice I want to predict cellular automata without having to actually run the simulation, or just find a way of accelerating them infinitely.
In the book "A New Kind of Science", Wolfram talks about "computational irreducibility", i.e. that there might in fact be no way to predict the behavior of these automata without running them. Is he right, or just a defeatist?
Why would you want to travel through time though if you can just simulate anything you'd want to find out without having to expose yourself to any risks?
Because simulations are insanely slow. I don't want to wait 100 years just to get a result. That's why I asked about the prediction thing.

Name: >>7,9,16 2012-03-05 14:54

>>14
Consciousness can perceive time in different ways. So I thought about coding an app that perceives time retroactively.
If you have a conscious program, all you have to do is slow it down or speed it up. How can you get unbounded speedup? Speedup is always relative to something: if you can slow yourself down with respect to some perceived "physics", that is the same as the physics being faster. What happens if you run a VR (Virtual Reality) and a SIM (Substrate-Independent Mind) on the same VM (Virtual Machine - in this case I assume it runs on a true Turing Machine, not merely a Finite State Automaton), but allow either to adjust its priority? The SIM can just increase its priority unboundedly, and thus find itself running as fast (or as slow) as it likes relative to the VR. The VM's speed or implementation shouldn't matter either - the subjective notion of time and the physical notion of time are very different.
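A toy sketch of that priority argument, assuming a cooperative round-robin VM (the scheduler and names are hypothetical, just to show that only the ratio is visible from inside):

```python
def run_vm(vr_steps_total, sim_priority):
    """Cooperative round-robin: each tick, the VR advances one step and
    the SIM advances `sim_priority` steps. The VM's absolute speed never
    appears anywhere - only the SIM/VR step ratio is observable from
    inside, which is the SIM's subjective speedup."""
    vr_clock = sim_clock = 0
    while vr_clock < vr_steps_total:
        vr_clock += 1
        sim_clock += sim_priority
    return sim_clock / vr_clock   # subjective speed relative to the VR

run_vm(1000, sim_priority=8)   # -> 8.0
```

Whether the host runs a tick per nanosecond or per century changes nothing in the returned ratio, which is the point about subjective vs physical time.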

The problem is how the hell do I control what an app "perceives"? How are the conscious patterns found?
That's a hard problem. You could say it's the input data, but that view would be incomplete - we don't perceive our input as it is, but merely processed/compressed versions of it. The input that we get from our senses is very noisy, but we almost always have a crystal-clear perception/model-of-the-world in our mind. Some reading that I like on this subject is Hawkins's "On Intelligence"; feel free to read it. The problem is close to that of building Artificial General Intelligence, although it only applies to some such systems, not all of them (some AGI researchers still seem to dream of making unembodied AIs which don't learn from the environment, but I don't think that's doable from scratch - you need to get knowledge from somewhere, and feeding knowledge in by hand is unrealistic if you want truly general intelligence).
On the subject of pattern recognition, there are a variety of interesting books/papers on how to induce functions/patterns from data; I could look some of them up if you're interested, although I don't have them here right now.
Some interesting videos on the subject that you might like can be found here: http://agi-school.org
I assume you are the same dude I talked to about a month or so ago who said that "consciousness finds itself".
I think it was me.
Does this mean that only the consciousness itself can perceive itself, i.e. I cannot do any calculations or computations of the consciousness of someone else?
You can run a program that could act intelligently with regard to you and likely be conscious as well. You cannot literally have its experiences, although I do think you can translate someone's experiences into ones you can perceive if you have a program trace (sometimes, not in general).
Here's a tentative example: http://sites.google.com/site/gallantlabucb/publications/nishimoto-et-al-2011
Of course, if you have no way of translating some experience, I don't see how you could have it without modifying one's cognitive architecture.
Also, it's possible to have programs that cannot (in theory) be reverse-engineered in any trivial manner (http://en.wikipedia.org/wiki/Homomorphic_encryption) or that have properties you don't know about (http://en.wikipedia.org/wiki/Rice's_theorem); thus, in a way, only the program truly knows what it's like to be itself, and you can at best guess.
>>15
Sure, that's one way to go about it. Try taking a look at OpenCog or some of Ben Goertzel's work, some of it should match your criteria. Also read that book I mentioned earlier.

Name: >>18 2012-03-05 15:02

>>17
In practice I want to predict cellular automata without having to actually run the simulation, or just find a way of accelerating them infinitely.
Partially answered in >>18. Since CAs can be Turing-equivalent, you run into the problem of computational irreducibility: you can't predict them without running them. There is of course the possibility of slowing yourself down (as a SIM, not a plain human) so that they run faster relative to you.

That's not to say no program can be predicted - some programs, especially those designed by engineers to be predictable, can be.
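Here's a minimal sketch of that contrast (my toy example, nothing beyond the elementary CA definitions): rule 90 is additive, so its evolution from a single seed has a closed form - Pascal's triangle mod 2 - letting you jump straight to step t without running steps 1 through t-1. That's exactly the kind of shortcut rule 30 or rule 110 are believed not to have.

```python
from math import comb

def rule90_step(cells):
    """One rule-90 step on an infinite line. `cells` is the set of
    positions holding 1; each next-state cell is the XOR (parity)
    of its two neighbours."""
    counts = {}
    for p in cells:
        for q in (p - 1, p + 1):
            counts[q] = counts.get(q, 0) + 1
    return {q for q, c in counts.items() if c % 2 == 1}

def rule90_closed_form(t):
    """Shortcut for a single seed at position 0: after t steps, cell j
    is alive iff t+j is even and C(t, (t+j)//2) is odd (Sierpinski)."""
    return {j for j in range(-t, t + 1)
            if (t + j) % 2 == 0 and comb(t, (t + j) // 2) % 2 == 1}

state = {0}                              # single seed cell
for t in range(1, 33):
    state = rule90_step(state)
    assert state == rule90_closed_form(t)   # shortcut matches simulation
```

The closed form exists because rule 90 is linear over GF(2); for non-additive rules like 30, no such reduction is known, which is Wolfram's irreducibility point.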

As a side-note, I've heard "free will" described in the sense of being unable to predict what you would do before doing it - computationally irreducible algorithms and (some) physical systems do fit this requirement.

In the book "A new kind of science", Wolfram talks about "computational irreducibility", i.e. that there might in fact be no way to predict the behavior of these automata without running them. Is he right, or just a defeatist?
Oh, I should have read this before responding. He's mostly right, but not completely. It is possible to find models which predict something better and better - sort of in the same way that you can solve the halting problem for specific cases if you can make certain assumptions and those assumptions are correct. This is not that different from inventing a set theory or higher transfinite forms of math where you assume infinite processes will behave in some specific way. The problem is that the more assumptions you make about the behavior of the infinite, the greater the risk of being wrong.
As math itself is inexhaustible (by Gödel and all), it will almost always be possible to find a shortcut, but never a general one. This is to say that science will never end - not even if the world is just math or computation!

Because simulations are insanely slow. I don't want to wait 100 years just to get a result. That's why I asked about the prediction thing.
Either you can make a better model or you can slow yourself down (only available to SIMs or AGIs). I don't see any other way out of this.

Name: >>19 2012-03-05 15:06

I should also have mentioned that if you want the perfect example of a program that is never 100% predictable (as in computationally irreducible), consider the Kolmogorov complexity of a program, predicting what the UD would return at a step chosen at random, or the halting probability (Chaitin's Omega).

Name: Anonymous 2012-03-05 15:15

>>15
Missed this:
Here's why: If I want the application to perceive backwards in time, the temporal regresses (possibly hypercomputational) are MY burden and I have to calculate them myself.
I'm not sure how you can do this without calculating the future states yourself. The problem here is also that it won't be able to interact with the backwards-in-time part of the environment.
Perception itself always goes forward within some environment's time - although I suppose it would be possible to access a part of the environment which runs backwards in time, just not to interact with it?
BTW, this idea was also explored in "Permutation City", but I've mentioned that novel way too many times already...

Name: Anonymous 2012-03-05 15:20

>>19,20
That's bad. I hate computational irreducibility.
It is possible to find models which better and better predict something.
The problem with these is that the closer they get to the solution, the slower they advance. The convergence is never linear (and if it is, the solution is already known).
Here's something I thought about: in quantum mechanics, it is theorized that the results of the double-slit experiment look the way they do because time flows both ways, i.e. the electrons on the wall were already there in the future, and the point where the two flows meet is where they are observed. I thought of applying the same concept to my simulations: try to "guess" both the future AND the past, and attempt to somehow abuse the birthday paradox to find a compromise between them. I tried this with reversible CAs, but the problem of computational irreducibility still applies: it takes LONG for things to start working, and the better they work, the longer it takes for them to get better. I saw no signs of the birthday paradox taking effect (time being square-rooted).
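For reference, here's the bidirectional idea in minimal form, with a toy invertible step standing in for a reversible CA (the step function and constants are made up for illustration). It also shows why I saw no birthday effect: each direction does half the steps, but the trajectory is deterministic, so total work stays linear in t - birthday square-rooting needs random sampling of a state space, not a single fixed chain.

```python
ROT, BITS = 5, 16
MASK = (1 << BITS) - 1

def step(s):
    """Toy reversible update: rotate left, then XOR a constant.
    Both operations are invertible, like a reversible CA step."""
    return (((s << ROT) | (s >> (BITS - ROT))) & MASK) ^ 0xBEEF

def unstep(s):
    """Exact inverse of step: undo the XOR, then rotate right."""
    s ^= 0xBEEF
    return ((s >> ROT) | (s << (BITS - ROT))) & MASK

def meet(start, end, t):
    """Check that `end` is t steps ahead of `start` by running forward
    t//2 steps from the start and backward the rest from the end; a
    valid trajectory must agree at the midpoint."""
    a, b = start, end
    for _ in range(t // 2):
        a = step(a)
    for _ in range(t - t // 2):
        b = unstep(b)
    return a == b
```

Meeting in the middle halves the distance each side covers, but unlike a birthday attack there is only one forward state and one backward state per depth, so nothing collides early.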

Name: Anonymous 2012-03-05 15:22

>>21
I have no problem calculating the future states. It's the problem of infinite regress when traveling backwards in time (causal loops). If I calculate them myself, I also have to define them myself (which defeats the point of trying to find a generalization).

Name: Anonymous 2012-03-05 15:26

>>22
One might also ask the P vs NP question here: how far can we reduce the time complexity of certain algorithms?

Name: Anonymous 2012-03-05 21:41

FORCED HYPER-INDENTATION OF CODE
