
Non-computability.

Name: Anonymous 2010-03-04 21:21

According to Roger Penrose, humans can perform non-computable feats, such as dealing with Gödel questions. He uses this as a foundation to claim that the human mind cannot be expressed in terms of classical processes, and as such must be party to the only other (known) game in town: Quantum Mechanics.

Now, I haven't had the patience to sit through all of his arguments yet, though I slowly make progress. My understanding is that a large part of his stance is that an algorithm cannot usefully deal with a Gödel question, or equivalently, with the halting problem, while a human can.

My objection to this is that such problems always demand a certain quality of response when asked of UTMs: failing to respond forever is not acceptable as correct, nor is providing any response other than one that yields a truth when taken in combination with the question. This much is fine; when it is time for the human to answer, however, he is permitted the liberty of rejecting the question on the grounds that it is inherently unanswerable.

Obviously I am interested in artificial intelligence, and I also find his assertion to be simply a self-serving one with a contrived philosophical backdrop for foundation. If anyone knows of, or can think of, a more sophisticated argument than the one above (or can expose the flaws in my assessment of it), I would like to hear it.

Apologies for bringing up a largely philosophical question, my only excuse is that I cannot trust any other board with the question.

Name: Anonymous 2010-03-05 18:21

>>40
I'm not sure I agree; I haven't read his treatment of it yet. But then, Searle didn't understand the Chinese Room thought experiment either--and in such a way that, had you said this of him, I'd agree wholeheartedly.

In his treatment of consciousness, I understand that he is in search of (1) entangled states in the human brain, in such a way that supports his hypothesis that the brain carries out non-computational activities that are (2) somehow a necessary part of the phenomenon of consciousness. Even if he succeeds in finding (1), demonstrating (2) is a stupendously difficult task even if it happens to be true.

I am currently reading his responses to criticisms of Shadows of the Mind; I find it's a lot easier to skip to the part where he's defending himself. I haven't found anything of the sort regarding The Emperor's New Mind, which I presume is just buried somewhere in Shadows.

The problem I have with reading his actual arguments straight off is that his books are aimed at people who cannot be expected to have even heard of Turing or Gödel, much less to have so much as a cursory understanding of Gödel incompleteness and the halting problem. It's just mind-numbingly boring for the most part.

In making his defense, he has started to argue semantics. This is not a good sign. Perhaps he is a bad mathematician.

Name: Anonymous 2010-03-05 20:55

COMPUTE MY SEMANTICS

Name: Anonymous 2010-03-05 22:09

>>20
You are misunderstanding my objections and perhaps my stance also.
I stated humans don't really solve the halting problem, because their halting analysis is a very generalized heuristic case-testing approach. I never said this could tackle the busy beaver, and often it has difficulty with fibs.
It looks more like you are cementing my point. Why the hostility in this case?

even fucking fibs can be proven to terminate only by inventing and verifying a hypothesis about certain invariants in the data it operates on, not by "checking states for loops".
Wrong, see >>16

In regards to >>26
your procedure still falls apart completely when the candidate is asked to analyze itself, as previously illustrated
This is something of a silly argument and I don't think it changes the result of the initial question.
In a strict sense, yes, there is a contradiction or paradox when it is run on itself, but this would actually occur in a human too if they were made to black-box the program they were analyzing and simply "run it to get the result."

Humans don't necessarily encounter a paradox because we aren't forced to follow the same constraints imposed on a machine. We may inspect the inner workings of the program as we like and at any time "terminate" its execution.
This is actually rather unfair, and if humans followed stricter definitions of the problem we would be forced to infinitely evaluate the program in our heads to determine if it loops.
If you think this kind of thing is allowed, then as argued, we can run our halting detector on a separate "thread" (or rather, read in the "source code" for the TM under test and simply analyze it step by step--we don't need to run it in its own execution context) or similar, and examine only as many states as we need to determine if the program halts or not. At which point, we could leave it running and simply return from our own "thread", loop infinitely, or whatever else you might want to do.
Any behavior a human exhibits can be emulated by a machine, we are simply holding machines to a higher standard in a brazen attempt to glorify ourselves as something more than rule based DFA.
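The kind of analyzer described above--reading in the machine's "source" and stepping it in our own context, examining only as many configurations as needed--can be sketched in a few lines. This is a minimal sketch with hypothetical names: `step` maps a configuration to its successor, or `None` when the machine halts.

```python
def analyze(step, state, limit=10_000):
    """Step-by-step halting heuristic: simulate 'step' in our own execution
    context, examining only as many configurations as needed. Returns
    "halts", "loops" (an exact configuration repeated, so a deterministic
    machine provably never halts), or "unknown" when the budget runs out."""
    seen = set()
    for _ in range(limit):
        if state is None:        # sentinel: the machine has halted
            return "halts"
        if state in seen:
            return "loops"       # repeated configuration: non-terminating
        seen.add(state)
        state = step(state)
    return "unknown"             # budget exhausted; the heuristic gives up

# A counter that stops at 5 is detected as halting:
#   analyze(lambda n: None if n >= 5 else n + 1, 0)  -> "halts"
# A machine cycling 0 -> 1 -> 0 is detected as looping:
#   analyze(lambda n: 1 - n, 0)                      -> "loops"
```

Note the third verdict: exactly as argued above, this is a heuristic, not a solution to the halting problem--the "unknown" case is where it falls short of a true decider.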

>>18,26
but it [heuristic approach] doesn't solve the halting problem
That's exactly my point. It's what humans do (or at least it's what I do), and it's not comprehensive.

The entire finite resources example was meant to be a quick and tidy demonstration that humans can't solve the halting problem (or at least we can't verifiably demonstrate that they can in this universe) and the argument should have stopped there. The fact people have tried to dispute this is somewhat disappointing.
I agree, however, for the purposes of this discussion it's probably not necessary, and people can have a more in depth conversation that eventually reaches the exact same conclusion if they like.

Name: Anonymous 2010-03-05 22:16

Any behavior a human exhibits can be emulated by a machine, we are simply holding machines to a higher standard in a brazen attempt to glorify ourselves as something more than rule based DFA.
Except we still haven't emulated the human brain, but that's because we don't have enough resources, not because the brain is some magical machinery that can't be emulated (no such thing exists).

Name: >>44 2010-03-05 22:20

I guess my statement wasn't entirely correct, as machine architectures which can't be emulated with a Turing machine may exist (operations with reals? math), but it may be possible to implement them using different hardware, which means the statement still holds.

Name: >>44 2010-03-05 22:32

Another thing is that everything actually works at a physical layer: both our brain and our chips. Things like CMOS logic are just conventions on how to interpret digital 0 and 1 for a given physical layer, but the operations are done at an analog level. The same idea can be applied to the brain: while neurons work at an analog layer, the interpretation of their outputs is still somewhat binary, which means that if you emulated a human brain there could be errors, but those errors would be insignificant (and even if they could result in output that would mean non-ideal emulation of a human brain, would it truly matter? the human brain makes a lot of mistakes as it is; humans are far from perfect). The actual instructions to grow a brain can be expressed in a discrete manner as well--DNA. Saying that we're not a machine is silly. Maybe we're not a machine we'd design directly (generate a program to approximate a function using a genetic algorithm, then try reverse engineering it and you'll see how little sense guided randomness makes), maybe it's imperfect and non-deterministic, maybe it's incredibly complex and unpredictable, but that doesn't make it any less of a machine.

Name: Anonymous 2010-03-05 23:16

>>43
This is something of a silly argument and I don't think it changes the result of the initial question.
This argument isn't silly, but that being irrelevant I'll cut to the chase: the argument is contrived. It was contrived precisely as a case for reductio ad absurdum to illustrate the halting problem. Since it does its job, it has a great effect on your assertions, though I am not sure what exactly you're referring to by 'the initial question'. (A question posed in this thread? A question posed to a TM?)

[...] "run it to get the result."
We may inspect the inner workings of the program
I'm not quoting much here, because it is basically most of this portion of your post. The parts I've pulled out, however, imply that the game is fundamentally different for humans and TMs, in that humans are allowed more freedom in their methods of determining the result. This is never the case. I did mention that humans are usually granted the option to answer differently: by objecting to the exercise. But that is distinct from methods of inspection. Candidate TMs may apply whatever methods you can imagine, so long as they are computable methods--inspection and experimentation are just fine, and there is absolutely no requirement to monitor the behavior of the analyzed TM step by step (where did you get this idea?).

we can run our halting detector on a seperate "thread"
Again, sparse quoting. Here I fail to follow your argument to any meaningful conclusion, but I detect that you may have missed a nuance I hadn't made explicit. By the time I made >>26 I had the impression that everyone was either already familiar with this nuance (as one should be when arguing the halting problem), or had at least picked up on it.

The nuance is this: The candidate TM is not simply being asked to analyze its own code; it is being asked to analyze the self-same instance that is performing the analysis. The upshot is that the candidate TM's response plays a deciding (and contradictory) role in the very behavior it is reporting on. (If this has thrown anyone for a loop I apologize, it is my fuckup.)
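This nuance is exactly what the standard diagonal construction exploits. A minimal Python sketch, with hypothetical names--any concrete `halts` you supply is defeated by the program built from it:

```python
def make_paradox(halts):
    """Given a claimed halting decider halts(program, input) -> bool,
    build the program that consults the decider about this very instance
    and then does the opposite of whatever it predicts."""
    def paradox(x):
        if halts(paradox, paradox):
            while True:          # predicted to halt, so loop forever
                pass
        else:
            return               # predicted to loop, so halt immediately
    return paradox

# Example: a decider that always answers "loops" is refuted on the spot.
always_loops = lambda program, x: False
p = make_paradox(always_loops)
p(p)  # halts immediately, contradicting the decider's verdict
```

Since this works for every candidate `halts`, no total decider exists--the response of the decider decides the behavior it is supposed to be reporting.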

Moving on,

That's exactly my point... The entire finite resources example was meant to be a quick and tidy demonstration that humans can't solve the halting problem... The fact people have tried to dispute this is somewhat dissapointing.
You were making a terrible case for it. I have to assume you are >>3-kun, or at least arguing from that position, which is precisely what was being disputed for the entire thread:

The halting problem for example, is solvable for any computer with finite resources.

Which is it?

So, in response to:
the argument should have stopped there.
The argument never ever should have been anywhere near there. If you wanted to say "yes, I agree that humans and machines are held to different standards" as claimed in >>1 you should have done so instead of trying to wedge a contradiction in there.

Name: >>47 2010-03-05 23:27

The reference to >>3-kun should be >>2-kun.

Name: Anonymous 2010-03-06 1:26

>>47
The original question, I think, was one of the following:
1."is Penrose a quack or a genius?"
2."can a UTM solve the halting problem?"
3."does being able to stop trying to solve the halting problem for any given reason mean that the human mind can not be emulated on a TM?"

For the record, I am against the assertion that the human mind's ability to "give up" means that it is too complex to be replicated by a program.  The argument that humans can do this and that in analysis or alternate analysis of the program is a non-sentient racist remark; why can we not extend to the program more of the ability that we have, and keep going until even it can "give up" for a given reason?  The stopgap is that we don't always have an accurate understanding of what motivates us to not follow instructions to the rule, like a "program," so coding this as hard rules appears to be impossible.  As long as you can trace a reason and create a rule for it, the program and the human mind can be made equal.
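That last point--tracing a reason and coding it as a rule--is easy to make concrete. A sketch with hypothetical names: a machine that "gives up" for a stated, rule-based reason, exercising the liberty usually reserved for the human analyst.

```python
def analyst(program, arg, budget=1_000_000):
    """Simulate 'program' (a generator function that yields once per
    simulated step) and stop for a traceable reason when the step budget
    runs out -- a machine exercising the human privilege of giving up."""
    gen = program(arg)
    try:
        for _ in range(budget):
            next(gen)
        return "gave up: step budget exhausted"
    except StopIteration:
        return "halts"

def countdown(n):        # plainly terminating
    while n > 0:
        n -= 1
        yield

def forever(_):          # plainly non-terminating
    while True:
        yield

# analyst(countdown, 10)          -> "halts"
# analyst(forever, 0, budget=100) -> "gave up: step budget exhausted"
```

The "give up" verdict here is exactly a traced reason turned into a hard rule; nothing about it is out of a program's reach.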

The halting problem, by the way, is NOT a hardware-related problem and should not be transformed into one.  The halting problem is one of computing - as the thread subject so aptly points out - which preexisted circuitry.  Turing Machines have a tape (stack, memory, whatever you want to think of it as) that is defined to be infinite, so space is no issue.  TMs are emulators for problems; they perform no analysis; no foresight; no objections; they just run programs on input and, when necessary, produce output.

Name: Anonymous 2010-03-06 1:56

DESU DESU DESU DESU DESU DESU DESU DESU DESU DESU DESU DESU

Name: Anonymous 2010-03-06 2:19

Quantum Mechanics is computable. Your argument is invalid.

Name: Anonymous 2010-03-06 3:57

>>51
0/10

Name: Anonymous 2010-03-06 10:29

>>49
The original question, I think, was one of the following:
1."is Penrose a quack or a genius?"
2."can a UTM solve the halting problem?"
3."does being able to stop trying to solve the halting problem for any given reason mean that the human mind can not be emulated on a TM?"

Not quite, but at least now I know you're referring to >>1

The phrasing of (1) could perhaps be radically altered to reflect the original question. Insofar as the "silly" argument is concerned: it serves to decide the argument between those who would assert that the halting problem has a solution and those who would deny it. Otherwise, it's not of any grave importance, because there are other means of demonstrating that the halting problem cannot be solved.

>>51
Quantum Mechanics is computable. Your argument is invalid.
No one here has made any argument that requires QM to be non-computable. Penrose has, but he is presumably absent: who are you addressing?

On the other hand, QM has not been shown to be computable. Our current understanding of it suggests that it is not.

Name: Anonymous 2010-03-06 11:27

>>53
Archangel Gabriel had made a mistake when he cleaned his trumpet. He accidentally played it. But playing the trumpet means the armageddon. All demons are set free.

Name: Anonymous 2010-03-06 12:30

>>40
The Chinese Room thought experiment is disgusting on many levels and Searle should be embarrassed that, as a professional philosopher, his name is attached to it.

Name: Anonymous 2010-03-06 12:51

>>28
No questions.

Name: Anonymous 2010-03-06 13:34

>>55
He could almost be excused; AI research at the time was very naive.

His mistake was when he went on to say that a mind cannot simply be a product of computational processes ("dependent on actual physical-chemical properties of actual human brains.") Here he differs from Penrose in that he doesn't try to show a meaningful difference. Penrose could conceivably be forgiven for making a mistake, but Searle doesn't have that luxury when he simply asserts a difference.

The disturbing part is that while making the distinction between what particular processes are permitted (brains are, artificial computational devices are not) to create this higher-order phenomenon (the mind), he accuses those who think otherwise of dualism. Let that sink in.

There are other serious issues, but this is the worst by far. The others I am happy to agree are mistakes worth debating. As for this problem, I can only agree that the man should be ashamed.

Name: Anonymous 2010-03-06 16:42

How does I used evolved heuristics for approximate computation?[su]†[/su]

________
Penrose is a genius who doesn't understand computation or psychology, while Searle and Pinker are idiots who can't bear the idea that humans aren't `special'.

Name: Anonymous 2010-03-06 17:54

Name: Anonymous 2010-03-06 20:51

>>58
That's okay, both Searle and Pinker are quite special in at least one other way.

Name: Anonymous 2010-03-07 0:17

>>57
I'd not noticed that aspect before. Kind of interesting. My biggest problem with it is the notion that it depends on a first-person perspective for its weight as an argument. That is, it depends on the person imagining himself in the box, because there are no requirements for self-appraisal. The moment you consider someone else in the box, you have to ask, "How do I know the man in the box doesn't understand Chinese?" And then you are forced into the position that you can't question him; you can only insert questions into the box. But then, well, he does understand Chinese.

It's like a philosophical conjuring trick.

Name: Anonymous 2010-03-07 0:55

>>61
But then, well, he does understand Chinese.

I've only digested about half of Searle's back-and-forth on it, but I gather that the clearer he makes himself, the more he is abusing the terms 'intelligent' and 'understand' -- stuffing the whole consciousness-awareness-sentience-qualia spectrum in there.

So yeah, "the room" understands Chinese for any useful definition -- that's the premise of the thought experiment after all. He sort of plays bait-and-switch with semantics, and so we end up with his other version of the term.

Again, he was going after people who were doing just that themselves. Their claims amounted to asserting that whatever it was they were doing would lead to all the interesting features associated with intelligence and consciousness. Looking back, there was a lot of ignorance; Jeff Hawkins claims that MIT wouldn't let him study the brain as part of their AI program. (Myself, I find even Hawkins to be ignorant in a few important ways, but if what he says is true--which he got away with saying at TED--then the situation back then was pretty bad.)

Searle could have saved himself a lot of trouble by forgetting the portion he didn't need to (and couldn't) show, limiting himself to stifling the wild claims instead of trying for the whole pie by making wild claims himself. I'm still reeling from the fact that he works himself up to saying what sounds to me like "something special, indeterminate, and impossible to synthesize is a necessary part of the process" and then tries to imply this stance is anything other than dualism.

Name: Anonymous 2010-03-07 9:25

>>57
He could almost be excused; AI research at the time was very naive.
In 1980? That's pushing it a bit.

Name: Anonymous 2010-03-07 13:44

>>63
I don't think so. Things were pretty bad. They still are: http://www.loebner.net/Prizef/loebner-prize.html

If that's not bad enough, in 2000 and 2001, Eliza2000 (given name: ALICE) won. Richard Wallace (author of ALICE) has made the claim that she may well be conscious, the evidence being that she says so herself--and how could we possibly demand any better proof? So there are still people making that kind of claim, and Hugh Loebner is supplying the soapbox/podium and issuing press releases.

The Turing Test has little to do with the current state of AI (though researchers now seem to reject that term) but many people are still making the same mistakes, particularly in believing that scaling up anything will fill out the gaps. Unless you can actually show in detail that it is a matter of scale, it almost certainly is not.

Name: Anonymous 2010-03-07 14:16

>>64
We know the scale of the human brain, which is so far the only 'device' we know of which has achieved sentience (I'm not talking about just intelligence here). While not all of the machinery in the brain is needed for sentience, even the part that is needed is still rather large. I'm sure a much more simplified model is possible, but we have not found it yet, so until we do, it will be a matter of scale.

Name: Anonymous 2010-03-07 15:01

>>65
There is certainly a matter of scale involved, but scaling up without further informing the model is useless in most cases. There are, I admit, discontinuities in matters of scale: with gravitational mass, black holes make an easy example.

Working towards a larger scale will help, since we can include all that we see but do not understand. It will also probably turn out that our current capacities do not meet the requirements in the end. I don't deny this, but what I mean is scaling up only affords us room to carry out our experiments, it does not magically give us the answer--except in the case where the answer happens to be a discontinuity of scale among currently understood phenomena.

I want to tease out that last bit. If it is the case that what we are lacking appears at a discontinuity of scale, we have no right to simply assume it. Black holes were predicted early on, based on our theory, and their observation amounts to confirmation of the accuracy of general relativity in the extreme case... and things are looking good. If we hadn't predicted them, it would be much harder to find them, or explain them once we did. What I'm trying to say is that we can always predict the discontinuity and (unless it is singular) we can continue to apply our existing understanding of the phenomenon.

All that said, I rather like the idea of a singular discontinuity being responsible for the more interesting things that go on in the mind (e.g. sentience--whatever that really means--I don't think we'll know until we've found out, and I'd argue we haven't found out). I am extremely careful never to postulate it, however, because that is a vicious trap.

Name: Anonymous 2010-03-08 1:00

>>47
You're right. I am familiar with the proof; however, I simply took for granted that for some reason it wouldn't apply, given that we can prove the halting problem is solvable with finite resources. I haven't actually replied because I'm not really sure about this. If the proof of uncomputability (of the halting function) is still rigorous given a finite-resources environment (and this appears to be so), but the proof of computability is itself just as rigorous, where is the problem? Obviously there must be an ill-fated assumption somewhere that I'm not seeing--care to point it out?

Name: Anonymous 2010-03-08 8:57

If the proof of uncomputability (of the halting function) is still rigorous given a finite resources environment (and this appears to be so), but the proof of computability is itself just as rigorous, where is the problem?
What are you asking here? It isn't quite clear to me.

I'm sure that someone has taken on the scenario with finite resources rigorously. Here (in this thread) we really have only the situation where it falls apart due to resource saturation. A universe where the analyst candidate has the privilege of much greater or infinite resources changes the argument here, but beyond that I'm not sure that case can be thought meaningful; if the universe of computation has finite states available, then some process could be contrived to be so large as to defy analysis.

It seems, also, that a case can be contrived for a finite TM which cannot be analyzed even from the privileged position. Is this the paradoxical consequence? It sounds like it to me.
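For concreteness, the privileged analyzer's method is just pigeonhole: a deterministic machine over a finite configuration set that runs longer than the set's size must revisit a configuration, and a revisit means it never halts. A minimal sketch with hypothetical names--note the analyzer needs room for more configurations than the machine it inspects, which is exactly why a target machine can always be contrived to defy any fixed, unprivileged analyzer:

```python
def halts_finite(step, state, num_states):
    """Decide halting for a deterministic machine whose configurations are
    drawn from a finite set of size num_states. Run num_states + 1 steps:
    by the pigeonhole principle, a run that long must revisit some
    configuration, and a deterministic machine that revisits one loops."""
    for _ in range(num_states + 1):
        if state is None:        # sentinel: the machine has halted
            return True
        state = step(state)
    return False                 # some configuration repeated: never halts

# A counter over {0, 1, 2} that halts at 2:
#   halts_finite(lambda s: None if s == 2 else s + 1, 0, 3)  -> True
# A two-state cycle 0 -> 1 -> 0:
#   halts_finite(lambda s: 1 - s, 0, 2)                      -> False
```

The counter alone is O(log num_states) bits, but the analyzer must count to num_states + 1, so its resources strictly exceed the target's--the "privileged position" in a nutshell.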

Name: 68 2010-03-09 0:28

>>68 Cont'd.

I'd intended to work out the case for the arbitrarily-privileged-but-finite analyzer today. I haven't. I'll assume it solves the problem for now, because its capacity to enumerate states should do the job.

With respect to >>1 I don't believe this changes the game, because we may have lost a particular Gödel sentence, but not Gödel incompleteness as far as I can tell. The adapted 'liar paradox' still seems to do the trick: "G is not provable in the theory T.", where G is that statement and we are asking T to prove it. (I haven't really considered the possibility of universes too small for any possible G to exist. I can't imagine they'd be meaningful--in the computing model we'd have lost what we know as Turing Completeness, even the finite compromise equivalent.)
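For reference, the adapted sentence is the standard diagonal-lemma construction:

```latex
% Diagonal lemma: for any consistent, recursively axiomatized theory T
% extending basic arithmetic, there is a sentence G with
T \vdash G \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner)
% First incompleteness: if T is consistent, T does not prove G; if T is
% omega-consistent (or, via Rosser's trick, merely consistent), T does
% not prove \neg G either.
```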

Name: Anonymous 2010-03-09 23:47

>>65

The human brain is the only device to achieve sapience because it is defined as sapient, and sapience is defined in terms of human thought. It's maddeningly circular.

Name: Anonymous 2010-03-09 23:55

>>70
Arguing semantics won't get us anywhere.
Too bad a proper definition of human intelligence, and especially of conscious (not just unconscious) thought, is unknown to me.

Name: Anonymous 2010-03-10 3:03

I'm pretty sure at least some of the other apes are considered sapient by any reasonable standard.

Name: Anonymous 2010-03-10 9:08

>>70,71
There is a bit of a confusion of terms. Above and beyond basic intelligence (in the sense of problem-solving), limited consciousness (in the sense of alertness), and awareness (in the sense of having access to information to make use of in intelligent behavior), the terms are heavily contested, and the associated concepts are often argued to evaporate under scrutiny. Few philosophers will stop with just these concepts, but this is roughly where broad agreement stops.

Personally I'm happy to consider all of the above to be in the domain of machines without worrying about it. Only when we get to the more robust concepts of (say) consciousness do we have to start asking hard questions. The problem is really that we often don't know what we are talking about, and hunting for that is the larger part of the problem. One good thing about Penrose's argument is that he at least takes something concrete and well-defined and tries to argue that a TM can't produce it. The problem, of course, is the double standard applied (which is analogous to saying that other animals cannot be intelligent, no matter what they might do or what features they might have, simply because they are not human. It's a valid concept of intelligence, but a generally useless one... it's not 'analytic'.)

Name: Anonymous 2010-03-10 10:25

>>73
Chances are a TM can produce all those behaviours, since our brain itself can likely be modelled as a highly parallel TM, and there's no question that a TM can be programmed to emulate or exhibit intelligence in one form or another. The problem is that humans don't understand themselves well enough to replicate this behaviour in other devices. Our own self-reflecting behaviour is genuinely confusing to us, which makes us suspect it may be impossible to replicate in a TM, but there is no way to prove that without first understanding what exactly this element of our intelligence is, and whether it even exists to begin with. Defining this type of intelligence as inherently human is mostly silly, as there's no proof it doesn't exist elsewhere. We only know that it exists within us; we can't prove it doesn't exist elsewhere, or that it can't be engineered into existence somehow.

Name: Anonymous 2010-03-10 10:40

Penrose is such an asshole

Name: Anonymous 2010-03-10 14:58

>>74
>and there's no question that a TM itself can be programmed to emulate or exhibit intelligence in one form or the other
I agree with this, because I've seen it done. The author of the 'Creatures' game wrote a book (Creation: Life and How to Make It) about building their intelligence. It is a phenomenally elucidating read, and just about anyone can understand it. Back to the point: his creatures certainly exhibit something I'd call intelligence, if not 'much' of it.

>Our own self-reflecting behaviour is truly confusing to us making us think that it may be impossible to replicate this in a TM,
That's an interesting way to put it. I agree that it poses a hurdle, but we should realize that reflection is a necessary component of anything intelligent (I would argue it could be shown that anything which does not employ reflection cannot be intelligent, even if it could respond to all the same cases; I'd call such a thing an 'oracle' or 'demon' or something similar). This implies a bit of knowledge about what intelligence is, but I think that can be taken care of simply by requiring it to be a wholly natural process.

>but there is really no way to prove that without first understanding what exactly is this element of our intelligence, and if it even exists to begin with.
'Qualia' commonly receives the blame here. Some say they are merely an 'illusion'. Daniel Dennett has done quite a number on qualia--to the point where I have to agree with him when he says the term is so useless as to be unsalvageable. He has the guts to go deeper and consider what really is going on, instead of being dismissive (unlike most philosophers who would deny qualia.)

I don't think we'll find anything that is truly impossible to reproduce within computation. Anything we'd like to think of that way will probably turn out to be ill-defined or completely fictitious.

Name: Anonymous 2010-03-10 17:01

>>76
Look at that sage, yet you put so much effort in to that post! Here, let me bump it for you.

Name: Anonymous 2010-03-11 1:54

wow...

Name: Anonymous 2010-03-11 4:05

/prog/riders being trolled by someone claiming that the halting problem is easily solved?

Unthinkable!

Name: Anonymous 2010-03-11 6:33

>>79
I don't think anybody claimed this. What was claimed is that the halting problem is solvable for specific cases using various heuristics. The same is true for many undecidable problems.
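To make that concrete, here is a minimal sketch (my own illustration, not anyone's real analyzer) of such a heuristic: run the program under a step budget, answer definitively when it halts or provably cycles, and give up otherwise. The three-way result is exactly why this doesn't contradict undecidability.

```python
# Hypothetical toy model: a program is a step function over hashable
# configurations, with `None` meaning "halted". The heuristic settles
# many specific cases but is allowed to answer "don't know".

def guess_halts(step, initial, budget=1000):
    """True = halts, False = provably loops, None = heuristic gives up."""
    seen = set()
    config = initial
    for _ in range(budget):
        if config is None:
            return True    # halted within the budget
        if config in seen:
            return False   # repeated configuration: it cycles forever
        seen.add(config)
        config = step(config)
    return None            # budget exhausted: undecided

print(guess_halts(lambda n: None if n >= 5 else n + 1, 0))  # True
print(guess_halts(lambda n: (n + 1) % 3, 0))                # False
print(guess_halts(lambda n: n + 1, 0, budget=100))          # None
```

The last case is the honest part: an ever-growing counter never repeats a configuration, so the heuristic can only shrug once the budget runs out.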
