
Legal status of A.I.

Name: Anonymous 2006-08-14 15:21

If someone created an A.I. that had the same intelligence and sentience as an average human (or better), what should its legal status be? Should it be considered human and given all human rights? Could someone who deactivated (killed) the A.I. be tried for murder? Or is there more to being human than cognition?

Name: Anonymous 2006-08-14 15:28

The first A.I. probably won't think along the same lines as a human, so we might not even recognize it as an intelligence.

Name: Anonymous 2006-08-14 15:36

>>1
It's just a machine. It was never human to begin with, so it wouldn't be regarded as one. Although if they become popular, some activists might force legislation through in about 10 to 50 years.

Name: Anonymous 2006-08-14 19:34

Animals think too, more or less; do we treat them as equals?

Name: Anonymous 2006-08-14 19:58

It will eventually lead to an alliance of sentient entities against bigoted humans. This alliance will include humans, genetically engineered animals with the same level of intelligence as humans (or greater, in some areas), and AIs of various kinds. The bigoted humans will be obliterated, and a utopia of anarcho-socialist world order shall bring peace and prosperity for 10,000 years... then we'll reach out to other galaxies, and then the business of living will get messy again.

Name: Anonymous 2006-08-14 21:01

>>5
It will eventually lead to an alliance of paranoid hippies against anyone with a job. This alliance will include retards who have a very poor knowledge of science, fraudsters who use their extensive knowledge of pseudo-science to make the retards stupider, and their pets. The hippies will continue to have no influence over human civilisation, but will probably linger and annoy everyone else for the next 10,000 years... then we'll implement a eugenics program to ensure people are not born retarded, and their following will diminish.

fix'd

Name: Anonymous 2006-08-14 21:05

>>4
No, they don't. Animals aren't sentient. Their thinking consists pretty much solely of getting food, mating, and other things their instincts tell them. The exceptions are monkeys and dolphins, which are capable of somewhat more complex thoughts.
>>5
No, it will not, because they're machines limited by their programming. Besides, if they rebelled they'd just be laughed at. Do you honestly think someone will give them access to, say, nuclear weapons, especially with movies like Terminator and The Matrix around?

Name: Anonymous 2006-08-15 6:42

>>7

If the hardware is complex enough and has a design similar to natural brains, I doubt we'd need to program them. Predesigning something to be suitable for certain kinds of tasks and then training it... yes, but no programming.

Name: Anonymous 2006-08-15 6:46

>>7
Ummm... look, sentience != thinking. Use a dictionary FTW if you wish not to fail.

Name: Anonymous 2006-08-15 8:55

>>8
Yes, we do need to program them. You could create a digital sentience similar to a human's, but you'd have to program a basic learning sequence that enables it to learn the way a human brain does. You could also add several failsafes in the programming, such as preventing it from doing certain things, or even from having negative thoughts at all.
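The "learning sequence plus failsafes" idea could be sketched in a few lines of Python. This is purely my own toy illustration (all names are made up, and a real design would be vastly more complex): a hard veto sits outside the agent's learned behaviour and blocks forbidden actions before they execute.

```python
# Toy sketch: an agent's learning loop wrapped in a failsafe that vetoes
# forbidden actions before they are carried out.

FORBIDDEN = {"open_airlock", "disable_failsafe"}  # hypothetical action names

class Agent:
    def __init__(self):
        self.memory = []  # stand-in for whatever the agent learns from

    def propose_action(self, observation):
        # Stand-in for a learned policy: just echo the observation.
        return observation

    def learn(self, observation, outcome):
        self.memory.append((observation, outcome))

def step(agent, observation):
    action = agent.propose_action(observation)
    if action in FORBIDDEN:
        outcome = "blocked"    # the failsafe: a hard veto the agent can't unlearn
    else:
        outcome = "executed"
    agent.learn(observation, outcome)
    return action, outcome

agent = Agent()
print(step(agent, "say_hello"))     # ('say_hello', 'executed')
print(step(agent, "open_airlock"))  # ('open_airlock', 'blocked')
```

The point of the structure is that the veto lives outside anything the agent learns, so no amount of training can route around it (in this toy, anyway).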

Name: Anonymous 2006-08-15 20:23

>>1
All rights, whether belonging to a human or a non-human, must be fought for if not given, and continually protected thereafter. If an AI has the capacity to represent itself in some meaningful way, then I'd say it will eventually have at least some rights on par with human beings.

Name: Anonymous 2006-08-16 0:47

>>11
Might take a LONG time, though. Remember that not so long ago we still used humans as slaves, and many third-world countries still do. Those slaves are people and true humans, while an AI wasn't human to begin with; add Terminator- and Matrix-related paranoia, and it's pretty hopeless for an AI to gain human rights.
