If an AI appears to be conscious in every observable way, why would it matter how it accomplishes that? Keep bullshit philosophical circlejerking like this out of computer science.
Name:
Anonymous2014-03-18 15:59
hax my anus
Name:
Anonymous2014-03-18 17:10
Well, there isn't an AI that is conscious in any observable way. We first need to understand whether one is even possible, or whether we ourselves are even conscious.
I agree that it is overused by people who don't really understand it to claim that strong AI is impossible in principle, but that shouldn't diminish its usefulness as a thought experiment when used correctly.
Name:
Anonymous2014-03-20 11:42
>>10
I've never heard the position that strong AI is impossible in principle argued from anything other than a religious standpoint (cumpooter caint haf sowl), and even that was in philosophy 101. Who argues this from a CS background?
Name:
Anonymous2014-03-22 21:41
The analogy claims that the computer, like the English speaker in the Chinese room, only obeys instructions, without understanding what they mean or what they accomplish.
Name:
Anonymous2014-03-22 23:36
Those old-timers at CSAIL should read their SICP at least once.
Name:
Anonymous2014-03-22 23:47
>>12
Fairly sure everyone knows that, dipshit. I claim that it's a meaningless thought experiment. If all empirical evidence indicates that something has a property, then denying that it has the property is simpleminded mysticism.
Name:
Anonymous2014-03-24 0:59
What if we take the human mind as the room?
There are lots of little people (cells) which know little of the world outside, and they follow instructions without understanding what they mean or accomplish, yet that somehow ends up as (hopefully) intelligible output?
Name:
Anonymous2014-03-24 1:19
Maybe intelligence is the ability to learn / adapt?
If some being/thing knew everything, its intelligence might become static, such that it might never answer the same question differently, since it would never learn anything new?
It would be very clever but still unintelligent?
Name:
Anonymous2014-03-24 2:17
>>16
I think that intelligence would be a prerequisite for cleverness.
>>17
My cat is clever because she knows that if she waits until I leave, she can tear up the furniture, but she is still so unintelligent that she can't catch the laser pointer.
My dog is intelligent because she can bark the number of times corresponding to the number of fingers I hold up on one hand. She is not clever because she still thinks I like it when she leaves dead shit in my bed, despite my punishing that behavior.
My goldfish is not clever because he cannot figure out that the glass is solid, no matter how often he runs into it. Nor is he intelligent, because he hasn't even figured out how not to die every two months.
My gimp is clever because after only a few times he figured out that I would beat him if he didn't call me Master Princess Hime-sama whenever speaking. He is also intelligent because he did all the homework for my GED for me.
It has implications for security too. I've heard cryptanalysts argue that if the output of the translation is transmitted from one part of the stack to another, and the same input always produces the same output, then the message isn't secure: the binary output can be visualized, inspected, and probabilistically reconstructed from its frequency structure.
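A minimal sketch of what they mean, using a toy byte-level substitution (the cipher, plaintext, and key here are all made up for illustration, not real crypto): any deterministic, input-independent mapping preserves frequency statistics, so the most common ciphertext byte pins down the most common plaintext byte.

```python
from collections import Counter

# Toy "translation": a fixed bijection maps each plaintext byte to the
# same ciphertext byte every time (7 is coprime with 256, so invertible).
key = {b: (b * 7 + 3) % 256 for b in range(256)}

plaintext = b"the room follows the rules of the room without understanding the rules"
ciphertext = bytes(key[b] for b in plaintext)

# Because the mapping is deterministic, ciphertext byte counts mirror
# plaintext byte counts exactly. The most frequent ciphertext byte is
# therefore the image of the most frequent plaintext byte (the space).
top_plain = Counter(plaintext).most_common(1)[0][0]
top_cipher = Counter(ciphertext).most_common(1)[0][0]
assert key[top_plain] == top_cipher
```

This is the same reason classical substitution ciphers fall to frequency analysis; a rule-follower in a room who applies the same lookup table every time leaks the table's structure to anyone watching the output.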
Name:
Anonymous2014-03-25 19:21
It's a neat thought experiment because it illustrates both how a series of dumb steps can be used to create a very smart component, and how a very smart component can be used to perform a series of dumb steps.
>>25
I don't understand. Is that like feminist literary criticism?
Name:
Anonymous2014-03-26 23:38
>>26
Yes, but it rejects the Marxist theory that all are equal and the idea that individual narratives and consent matter. It's also post-postmodern, which oddly makes more sense.