
Non-computability.

Name: Anonymous 2010-03-04 21:21

According to Roger Penrose, humans can perform non-computable feats, such as dealing with Gödel questions. He uses this as a foundation to claim that the human mind cannot be expressed in terms of classical processes, and as such must be party to the only other (known) game in town: Quantum Mechanics.

Now, I haven't had the patience to sit through all of his arguments yet, though I slowly make progress. My understanding is that a large part of his stance is that an algorithm cannot usefully deal with a Gödel question, or equivalently, with the halting problem, while a human can.

My objection to this is that such problems always demand a certain quality of response when asked of UTMs: failing to respond forever is not acceptable as correct, nor is providing any response other than one that yields a truth when taken in combination with the question. This much is fine; however, when it is time for the human to answer, he is permitted the liberty of rejecting the question on the grounds that it is inherently unanswerable.
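The asymmetry can be made concrete with a minimal Python sketch (all names here are hypothetical, with ordinary functions standing in for encoded TMs). A responder that is allowed the human's escape hatch of declaring the question unanswerable is never caught by the diagonal construction, which is exactly the liberty the formal problem denies to a UTM:

```python
def human_style(prog, arg):
    # the human's privileged third answer, illegal for a total decider
    return "unanswerable"

def diagonal(prog):
    # the classic diagonal program: defy whatever verdict it receives
    verdict = human_style(prog, prog)
    if verdict == "halts":
        while True:   # loop forever to falsify a "halts" verdict
            pass
    # on "loops" or "unanswerable", simply halt

diagonal(diagonal)  # halts, and "unanswerable" remains unfalsified
```

The contradiction only bites when the responder must commit to "halts" or "loops" for every input; permit "unanswerable" and the trap springs on nothing.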

Obviously I am interested in artificial intelligence, and also find his assertion to be simply a self-serving one with a contrived philosophical backdrop for foundation. If anyone knows of, or can think of, a more sophisticated argument than the one above (or can expose the flaws in my assessment of it) I would like to hear it.

Apologies for bringing up a largely philosophical question, my only excuse is that I cannot trust any other board with the question.

Name: Anonymous 2010-03-05 23:16

>>43
> This is something of a silly argument and I don't think it changes the result of the initial question.
This argument isn't silly, but that being beside the point, I'll cut to the chase: the argument is contrived. It was contrived precisely as a case for reductio ad absurdum to illustrate the halting problem. Since it does that job, it bears directly on your assertions, though I am not sure what exactly you're referring to by 'the initial question'. (A question posed in this thread? A question posed to a TM?)

> [...] "run it to get the result."
> We may inspect the inner workings of the program
I'm not quoting much here, because it is basically most of this portion of your post. The parts I've pulled out, however, imply that the game is fundamentally different for humans and TMs, in that humans are allowed more freedom in their methods of determining the result. This is never the case. I did mention that humans are usually granted the option to answer differently, by objecting to the exercise, but that is distinct from methods of inspection. Candidate TMs may apply whatever methods you can imagine, so long as they are computable methods. Inspection and experimentation are just fine; there is absolutely no requirement to monitor the behavior of the analyzed TM step by step. (Where did you get this idea?)
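To make that concrete, here is one computable "experimentation" method, sketched in Python with names of my own invention: a bounded simulator that inspects a machine by actually running it, and honestly answers "unknown" when its budget runs out. It is incomplete, but never wrong. Each machine is modeled as a Python iterator whose next() is one step and whose exhaustion is a halt:

```python
def bounded_verdict(machine, budget=1000):
    """Run `machine` (an iterator; one next() per step) for at most
    `budget` steps. Answers "halts" or "unknown", never incorrectly."""
    for _ in range(budget):
        try:
            next(machine)
        except StopIteration:
            return "halts"
    return "unknown"

def halting():            # a machine that stops after three steps
    yield from range(3)

def looping():            # a machine that never stops
    while True:
        yield

assert bounded_verdict(halting()) == "halts"
assert bounded_verdict(looping(), budget=10) == "unknown"
```

Any such partial method is fair game for a candidate TM; what is forbidden is only the pretense that it decides every case.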

> we can run our halting detector on a separate "thread"
Again, sparse quoting. Here I fail to follow your argument to any meaningful conclusion, but I suspect you may have missed a nuance I hadn't made explicit. By the time I made >>26 I had the impression that everyone was either already familiar with this nuance (as one should be when arguing the halting problem), or had at least picked up on it.

The nuance is this: the candidate TM is not simply being asked to analyze its own code; it is being asked to analyze the self-same instance that is performing the analysis. The upshot is that the candidate TM's own verdict feeds back into the very behavior it is supposed to predict, forcing a contradiction. (If this has thrown anyone for a loop, I apologize; it is my fuckup.)
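The self-application can be sketched directly in Python (function objects standing in for TM encodings; the decider here is a deliberately naive stand-in, since a correct one cannot exist). Whatever total verdict the decider gives about the diagonal program run on itself, the diagonal program does the opposite:

```python
def make_diagonal(halts):
    """Build the diagonal program for a purported total decider
    halts(prog, arg) -> bool."""
    def d(prog):
        if halts(prog, prog):
            while True:     # defy a "halts" verdict by looping
                pass
        # otherwise defy a "loops" verdict by halting
    return d

def claims_loops(prog, arg):
    return False            # a decider that always answers "loops forever"

d = make_diagonal(claims_loops)
d(d)   # halts, so claims_loops was wrong about (d, d)
```

Swap in a decider that always answers True and d(d) loops forever instead, again refuting it; no total decider escapes both horns.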

Moving on,

> That's exactly my point... The entire finite resources example was meant to be a quick and tidy demonstration that humans can't solve the halting problem... The fact people have tried to dispute this is somewhat disappointing.
You were making a terrible case for it. I have to assume you are >>3-kun, or at least arguing from that position, which is precisely what was being disputed for the entire thread:

> The halting problem for example, is solvable for any computer with finite resources.

Which is it?
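For the record, the finite-resources claim quoted above is itself standard: a deterministic machine with finitely many configurations must, by pigeonhole, either halt or revisit a configuration, so an outside observer with more memory can decide its halting by cycle detection. A Python sketch under those assumptions (the step function and states here are hypothetical):

```python
def halts_finite(step, state):
    """Decide halting for a deterministic machine whose configurations
    are hashable and finite in number. `step(s)` returns the next
    configuration, or None when the machine halts."""
    seen = set()
    while state is not None:
        if state in seen:
            return False    # repeated configuration: an eternal loop
        seen.add(state)
        state = step(state)
    return True

assert halts_finite(lambda s: None if s >= 3 else s + 1, 0) is True
assert halts_finite(lambda s: (s + 1) % 5, 0) is False
```

Nothing here contradicts undecidability: the decider needs strictly more memory than the machine it inspects, so no machine can play this trick on itself.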

So, in response to:
> the argument should have stopped there.
The argument never ever should have been anywhere near there. If you wanted to say "yes, I agree that humans and machines are held to different standards" as claimed in >>1 you should have done so instead of trying to wedge a contradiction in there.
