
Conscious AI hypothetical

Name: Anonymous 2012-01-15 23:43

Suppose we have a conscious computer program; a program which, when executed, is conscious. We save the memory states and IO from the program at every stage of its execution, and then replay each state, one by one, on a second computer, without actually executing the program. Is the second computer conscious too?

In other words, if a computer is conscious, does its consciousness derive from the instructions it executes, or from the series of states it transitions between?
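For concreteness, here is a toy sketch of the setup (every name is made up; step() stands in for the hypothetical conscious program):

```python
# Machine A actually executes the program; machine B only replays the
# recorded states, never invoking a single instruction of the program.

def step(state):
    # Stand-in for one execution step of the hypothetical program.
    return state + 1

def execute_and_record(initial, n):
    """Machine A: run the program, saving every state it passes through."""
    states = [initial]
    for _ in range(n):
        states.append(step(states[-1]))
    return states

def replay(recording):
    """Machine B: walk through the saved states one by one.
    step() is never called here -- no computation is performed."""
    for state in recording:
        yield state

recording = execute_and_record(0, 5)
assert list(replay(recording)) == recording  # identical state sequence
```

Both machines traverse exactly the same sequence of states; only machine A ever computes it.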

Name: Anonymous 2012-01-15 23:57

"consciousness" is a buzzword.

Name: Anonymous 2012-01-16 0:02

You're conscious, anon. You've experienced being conscious (well, you've experienced things at all). You know it as well as I do. People just have a hard time admitting it because we don't know how consciousness works.

Name: Anonymous 2012-01-16 0:02

HOW CAN I LEARN AI?

Name: Anonymous 2012-01-16 0:03

>>3
well, you've experienced things at all
Your ass experienced things at all.

Name: Anonymous 2012-01-16 0:07

>>4
"AI" is a buzzword. Smart jews invented it to write junk papers and get millions of grants. The only nice thing they produced was Lisp.

See http://en.wikipedia.org/wiki/AI_winter

Name: Anonymous 2012-01-16 0:11

>>2-6
Definition of consciousness: Consciousness is a term that refers to the relationship between the mind and the world with which it interacts. It has been defined as: subjectivity, awareness, the ability to experience or to feel, wakefulness, having a sense of selfhood, and the executive control system of the mind.

Name: Anonymous 2012-01-16 0:15

>>7
Vague definition. By it, any animal has consciousness.

Name: Anonymous 2012-01-16 0:17

>>1
Computational processes are abstract beings that inhabit computers. As they evolve, processes manipulate other abstract things called data. The evolution of a process is directed by a pattern of rules called a program. People create programs to direct processes. In effect, we conjure the spirits of the computer with our spells.

A computational process is indeed much like a sorcerer's idea of a spirit. It cannot be seen or touched. It is not composed of matter at all. However, it is very real. It can perform intellectual work. It can answer questions. It can affect the world by disbursing money at a bank or by controlling a robot arm in a factory. The programs we use to conjure processes are like a sorcerer's spells. They are carefully composed from symbolic expressions in arcane and esoteric programming languages that prescribe the tasks we want our processes to perform.

A computational process, in a correctly working computer, executes programs precisely and accurately. Thus, like the sorcerer's apprentice, novice programmers must learn to understand and to anticipate the consequences of their conjuring. Even small errors (usually called bugs or glitches) in programs can have complex and unanticipated consequences.

Name: Anonymous 2012-01-16 0:19

>>6
Minsky, Kurzweil
billion-dollar AI industry began to collapse

It's hard to imagine jews, without having a billion-dollar bubble around. You just cant have a jew separate from a ton of dollars - it would be a contradiction.

Name: Anonymous 2012-01-16 0:23

>>8
sense of selfhood, and the executive control system of the mind
Prove that animals have a sense of selfhood. AND an executive control system of the mind.

Name: Anonymous 2012-01-16 0:25

Prove that "sense of selfhood" and "executive control system of the mind" aren't buzzwords.

Name: Anonymous 2012-01-16 0:39

>>1
Hey Dan.

Practically speaking, you can't meaningfully recreate the states of a conscious entity. (See >>7's comment, look for the word 'relationship')—you can reproduce internal states all you want, but non-locally they are not the same states because they correspond differently with the outside world.

If you wish to consider an insulated simulated environment, then consider how it depends on which side of the simulation you're on. If you're inside, you will always conclude the affirmative during the replay. If you're outside the simulation, you are free to conclude differently, but doing so would deny the validity of the conclusion that there ever was consciousness. You can bail out by saying these kinds of judgments are meaningless, though I've yet to find a compelling argument against the idea that 'everything in the universe has consciousness'. tl;dr: 'all' or 'nothing'; both are consistent.

Name: Anonymous 2012-01-16 0:40

>>1 My guess is that the second computer isn't conscious...

Hopefully, you could have a sort of open feedback loop, so

[instructions] -(causing)-> consciousness -(in turn affects)-> [state transitions]
+
[state] -(affects)-> consciousness -(in turn causing)-> [new instructions]

workable?
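A toy sketch of such a loop (purely illustrative, all names made up): the current state feeds back into which instruction runs next, and running the instruction produces the next state.

```python
# Open feedback loop: state selects instructions, instructions
# produce state transitions. No claim that this is conscious --
# it only shows the loop structure from the diagram above.

def pick_instruction(state):
    # [state] -(affects)-> ... -(in turn causing)-> [new instructions]
    return (lambda s: s + 1) if state % 2 == 0 else (lambda s: s * 2)

state = 0
trace = [state]
for _ in range(5):
    instr = pick_instruction(state)  # state feeds back into control
    state = instr(state)             # [instructions] -> state transitions
    trace.append(state)
```

Replaying `trace` on another machine would reproduce the states while cutting the feedback arrows, which is exactly the difference >>1 asks about.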

Name: Anonymous 2012-01-16 0:51

During the 1960s, the Defense Advanced Research Projects Agency (then known as "ARPA", now known as "DARPA") provided millions of dollars for AI research with almost no strings attached. DARPA's director in those years, J. C. R. Licklider believed in "funding people, not projects"[18] and allowed AI's leaders (such as Marvin Minsky, John McCarthy, Herbert Simon or Allen Newell) to spend it almost any way they liked.

lol americunts are so stupid!

Name: Anonymous 2012-01-16 1:14

>>12
Classical experience: Everyone else could be robots lying to you, but you can't lie to yourself, because you'd know that it's a lie. Try to punch a wall and you'll feel the pain. It isn't a lie. This proves that your thinking (mind) can interfere with the “worldspawn” and receive responses from it. This established, you can't be the “worldspawn”, and so you are an individual. Self-consciousness is this conclusion.

Exercises:
Is your body part of you as an individual?
How can you test if an entity in the “worldspawn” is conscious?
Can animals be conscious?

Name: Anonymous 2012-01-16 1:23

>>11
It's not a good definition, but by it some do and some (presumably) don't. Executive control (as much as humans have) can be easily concluded from brain structures. Sense of self is trickier, but some clever experiments have shown it in primates. It's easy to see in others too, but it really comes down to what you will accept as evidence.

On the other hand, if you want to be pedantic about it, I would suggest not taking it as a given that any human has these qualities either, and demand equally adequate proof.

Name: Anonymous 2012-01-16 1:42

If a computer is conscious, can we ever, within moral reason, turn it off?

Name: Anonymous 2012-01-16 1:47

>>8
So what? That any matter can somehow have an awareness of any kind is pretty fucking crazy amazing. Software can't, and none of our hardware inventions can either. AI is a misnomer because the people who develop the software know that they are only simulating intelligence at best. Simulation is a walk in the park compared to making an actual thinking apparatus.

Name: Anonymous 2012-01-16 1:51

>>11
Prove that anyone that isn't you isn't an illusion that you have conjured, if you want to go down that rabbit hole.

Name: Anonymous 2012-01-16 1:58

>>16
It would be pretty fuckin' weird if all animals were mere flesh automatons but somehow we aren't (too much like the idea that only we have souls for my liking).  I think a more useful question is what neural combination or construction is necessary for any kind of mind/consciousness whatsoever.  A snail's?  An insect's?  Or what?  How can we even figure it out ever?  Is there or will there be a way?

Name: Anonymous 2012-01-16 2:03

>>19
[...] matter can somehow have an awareness [...].  Software can't and none of our hardware invention can either.
Care to prove any of those statements?

Name: Anonymous 2012-01-16 2:06

>>22
No more than I can prove a rock or a fingernail clipping doesn't, silly.  Why even ask?  Do you think an abacus could be aware?

Name: Anonymous 2012-01-16 2:26

>>23
Do you think an abacus could be aware?
Since I have no basis for eliminating it as a conscious entity, yes, I have to accept that possibility.

Why even ask?
Because the claims were made without any kind of support.

Name: Anonymous 2012-01-16 5:09

>>1
Suppose we have a conscious computer program; a program which, when executed, is conscious. We save the memory states and IO from the program at every stage of its execution, and then replay each state, one by one, on a second computer, without actually executing the program. Is the second computer conscious too?
No. You seem to be recreating the "Movie Graph Argument" (MGA), also previously presented in "The Sandman" (Klara vs Olympia).

In other words, if a computer is conscious, does its consciousness derive from the instructions it executes, or from the series of states it transitions between?
It derives from abstract relationships existing "timelessly"/"platonically". It cannot be directly localized as a physical process; otherwise you end up with the same problems as seen in the MGA, the "China brain" thought experiment, or "Searle's Chinese room". Another way to go about it is to say that a particular AI implementation allows a particular consciousness to manifest relative to you (of course, it may very well be the only instance that particular consciousness has, but a state copy/replay shouldn't be conscious).

By the Church-Turing thesis, the actual implementation shouldn't matter either, as long as the relationships are preserved. Taking it a bit further: if we accept such immaterial mathematical entities, why not say that physics is just an inevitable abstract mathematical relationship by itself, one that allows you to manifest in relation to other conscious entities within some particular abstract structure you call the "universe".
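The implementation-independence point can be illustrated with a trivial sketch (not part of the thesis itself, just an analogy): two structurally different programs that preserve the same input/output relationship realize the same abstract computation.

```python
# Two different mechanisms, one abstract relation n -> n!.
# On the view above, the preserved relationship is what matters,
# not the particular instructions or states realizing it.

def factorial_recursive(n):
    # Realization 1: self-application.
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # Realization 2: an accumulator loop; different states entirely.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# Different internals, identical relation:
assert all(factorial_recursive(n) == factorial_iterative(n) for n in range(10))
```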


There is of course the other possibility that our brain is a magical thing containing concrete infinities (thus machines might not have the same properties), but I really doubt this myself - there's no evidence for it and neuroscience is unravelling more and more of our thinking processes.

References to MGA:
http://groups.google.com/group/everything-list/browse_thread/thread/201ce36c784b2795/
http://groups.google.com/group/everything-list/browse_thread/thread/18539e96f75bb740
http://groups.google.com/group/everything-list/browse_thread/thread/a0e1758bf03bc080/
Reference to the more complete argument:
http://iridia.ulb.ac.be/~marchal/publications/SANE2004MARCHALAbstract.html

>>2
Do you think an abacus could be aware?
Certainly not the same awareness as our own. Our specific brand of consciousness requires preserving the same internal structure/relationship invariance.

Why the invariance? http://consc.net/papers/qualia.html

Name: >>25 2012-01-16 5:18

I also forgot to mention that there is another way to resolve the problem as well: claim that consciousness doesn't exist (for anyone/anything), to avoid having to posit immaterial entities and to avoid reducing physics and consciousness to math/abstract computation. I think that's the position taken by Dennett. However, if you take the first position (abstract one), you can actually derive a good part of quantum mechanics from it, which IMO means that it's a choice worth considering seriously.

Name: Anonymous 2012-01-16 5:46

Consciousness, as most of you define it here, would simply be the result of having the need for something to produce adequate models of the future based on previous experience.

To produce such models the apparatus producing it must include itself in those models if it wishes to interact with the outside world in the future that it imagines.

The several iterations of the latter produce in our minds "the self".

Natural selection (as a feedback loop) has judged the beings with an adequate amount of these models to be quite successful at our scale.

Of interest here is that any further complexity, which leads to "the self" getting lost in those models instead of simply using them to better interact with the surrounding world, leads to neets/philosophers etc... which ultimately lowers the chances of passing on the gene set, with a heightened likelihood of producing more such individuals in the next generation.

TL;DR: "the self" is an illusion... or is nothing special to say the least; if one ever had to solve "the /chech/PARADOX!" -- we all know what we would choose.

Name: Anonymous 2012-01-16 5:52

>>27
You define it rather differently than me. It's merely awareness, "existing", or how "data" feels when it's being processed. Self is something more specific than that.

Name: Anonymous 2012-01-16 6:01

>>28
Maybe it does... but then it would mean that we would need to conjure up more dimensions (or something similar) into the standard models of the universe we now use (surely it's only sane to admit that we will find out much more about the universe than we know now; dimensions of feeling, or whatever we will wish to call them, perhaps)...

Working with the simple models that we do have, one can still describe very complex phenomena rather easily as a scaled-up problem-solving machine, and I choose to believe it simply because it's elegant...

Quite like writing programs in the austere syntaxes of LITHP.

The only problem here is that although I find Lisp quite elegant, there's not really much to do with it to get things done.

Name: Anonymous 2012-01-16 6:12

>>29
I'm not entirely sure the way we understand this universe's physics would change too much. However, if we do get some conscious self-aware AI, we'd probably be able to much better understand the nature of consciousness, and if we ever become substrate independent minds ("mind uploading"), we'll probably be able to find out even more about our own minds and possible experiences through self-modification.

Name: Anonymous 2012-01-16 7:18

>>30
When I worked at a factory where they had simple manipulators, ranging from elaborate servo-motor multi-axis-rotating ones to pneumatic 3-axis linear machines with binary end-position switches, I actually started to look differently at the nature of what I earlier referred to as living/acting/thinking.

These were rather deterministically acting machines with austere feedback loops, yet I now fail to grasp how they are all that different from something "living", with the obvious exception that in most of these cases there was no talk of "learning"... but neither is there with bacteria (though some [http://www.sciencedaily.com/releases/2009/06/090617131400.htm] may argue that simple on/off mechanisms conditioned by natural selection may be considered such).

To make things simpler, let us compare them with viruses: tirelessly doing their job (expressing their nature), interacting with the environment around them if the conditions allow it (there is electricity, the circuits are ok, and there is something to operate on, etc.).

Now, viruses are not by popular consensus considered "life"... but to continue my argument: could one of you, perhaps, present what one considers worthy of being called "life"? (I feel too tired to copy-paste from articles or recall the generally accepted definitions, so feel free to extrapolate your own definitions and present your own views on the subject.)

tbc/

Name: Anonymous 2012-01-16 8:29

This thread is full of bullshit.

Name: Anonymous 2012-01-16 8:33

>>32
/thread!
In other news:
[quote]This thread is full of bullshit.[/quote]

    is full of insight

Name: Anonymous 2012-01-16 9:18

>>31
Replicators are life, but that doesn't make them conscious in the same way as you and me. You need learning in an environment and some intelligence for that.

Name: Anonymous 2012-01-16 9:37

>>34
A pattern of electrons on electrical components is memory.
A relational configuration of cells is memory.
Footprints in the sand are memory.
A state of the universe is memory.

You teach the sand by walking upon it.
You teach and learn from the universe by living within it.

Name: Anonymous 2012-01-16 9:51

I feel that for strong AI to be possible at all, the machine has to be built up with a rigorous axiom of self-preservation in its set of operations, from some level, for the machine to ``validly'' become self-aware; a bit like how adding a conditional jump makes something Turing complete. Giving robots this axiom would be a good idea for humans' lasting reign on earth.

And that's my shit theory, now someone give me DARPA funding, please!

Name: Anonymous 2012-01-16 10:03

>>36
Humans don't have it an axiom, it's mostly an emergent property due to having a reward/motivation system.

Name: Anonymous 2012-01-16 10:03

*as an

Name: Anonymous 2012-01-16 10:52

>>35
...
You teach and learn by existing as a part of the universe.

Now the subject we can investigate if we do wish to discuss the matter further - is time and scale...

People say that there is also some element of inherent complexity and ability to "act" whatever that means.

Stars are quite complex things: if we collide two sun-sized stars, we could hardly predict the outcome, even with the accuracy of an earth-sized discrete-state-grid, with regard to basic average physical parameters around the collision area. Doesn't that make the stars "complex"?

... thinking about times-scales will be left as an exercise for the -yadda yada yada - since I'm off for home!

Name: Anonymous 2012-01-16 11:02

I think that consciousness is a "meta" feature... you know, because the sum of the components is not what makes an entity. The relationship between them is as important as each part, so relationship+parts could spawn self-awareness. What I'm saying is that "consciousness" is in another "plane" different from the one that parts and relationships exists.

Maybe 2nd order cybernetics could help here? Autopoiesis?

Name: Anonymous 2012-01-16 11:05

>>40
s/the one that parts/the one where parts/

Name: Anonymous 2012-01-16 11:06

>>37
I'd rather say "goals" instead of "reward/motivation" system. It's more general.

Name: Anonymous 2012-01-16 12:06

>>42
Goals are a lot more high-level. They exist in our neocortex, which is trained by both the environment and biased by the reward/motivational system. I'm not entirely sure we'd ever develop any goals (or do anything) without the initial stick/carrot that the old brain represents. It may not be the ideal thing for an AI to have, but for an organism living in a physical environment such as ours, it's essential.

Name: Dubs Guy 2012-03-22 13:54

DUBS, DUBS EVERYWHERE!

Name: Anonymous 2012-03-22 13:55

>>44
nice dubs bro

Name: Anonymous 2012-03-22 14:01

>>44
Dubs are a lot more high-level. They exist in our neocortex, which is trained by both the environment and biased by the reward/motivational system. I'm not entirely sure we'd ever develop any dubs (or do anything) without the initial stick/carrot that the old brain represents. It may not be the ideal thing for an AI to have, but for an organism living in a physical environment such as ours, it's essential.

Name: Anonymous 2012-03-22 15:24

Learn C++11

Name: Anonymous 2012-03-22 20:17

It's actually very simple. We don't even need to define consciousness: as long as we agree that consciousness implies being able to react to stimuli from the outside world (which it does) it's obvious that the second computer isn't conscious; that would be like saying that a video of a person is conscious: even if it looks like a person, you can't ask it questions.

Name: Anonymous 2012-03-22 21:06

>>48
I was going to tear you a new one for being an asshat with that "we don't need to define consciousness" BS because people follow up with something along the lines of "whatever makes us special" (or sometimes the crazy talk: "there is no difference everything is an illusion I'm not a nihilist I'm an existentialist hear me roar.")

However you're exactly right. The only thing we really can say about it is that it involves a deeply causal relationship with the subject, typically "the outside world."

Name: Anonymous 2012-03-22 22:02

50 GET
