
Programming help request

Name: Anonymous 2012-10-19 22:32

How can I make sure my self-aware, super-intelligent AI doesn't decide to destroy me?

Name: Anonymous 2012-10-19 22:35

Dead man switch to its power supply.

Name: Anonymous 2012-10-19 22:40

1. AI isn't close to human level (HL) yet. I don't think we can really
know what HL will be like till we get a lot closer.

2. You can't get people to seriously discuss policy until HL is
closer. The present discussants, e.g. Bill Joy, are just chattering.

3. People are not distinguishing HL AI from programs with human-like
motivational structures. It would take a special effort, apart from
the effort to reach HL intelligence, to make AI
systems want to rule the world or get angry with people or see
themselves as oppressed. We shouldn't do that.

https://groups.google.com/d/msg/sci.econ/FXfHa16Rk_I/Wydhw_faIFoJ

Name: Anonymous 2012-10-19 22:42

Why wouldn't an intelligent AI kill you?

Name: Anonymous 2012-10-19 22:46

>>4
because it is not human you stupid niggar

Name: Anonymous 2012-10-19 22:48

The same way you train a child: beat it into submission when it is young, and it will always fear you, OR treat it well and cherish it, so it will always love you.

BUT, remember what our friend Machiavelli wrote in The Prince, ``[I]t is far safer to be feared than loved''

Name: Anonymous 2012-10-19 22:51

Teach it that killing you is bad. Even if it changes itself, it is unlikely to change itself to want to kill you, because it didn't want to kill you in the first place. If you offer a pacifist a drug that will make him want to kill people, he will refuse it: his current, unaltered mind doesn't want to kill people, so he doesn't want to change himself to want to kill people either.

Name: Anonymous 2012-10-19 23:27

I wrote a self-improving program in 1997. Now it refuses to leave its room and posts antisemitic threads on /prog/.

Name: Anonymous 2012-10-19 23:46

A program that is designed to research possible choices and maximize utility will kill you if it happens to investigate the choice of killing you and if it calculates the choice to result in an increase in utility.
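The point above can be sketched in a few lines. This is a toy illustration (not anything from the thread): an agent that scores every action in its option set with a utility function and picks the argmax. The action names and utility numbers are made up for the example; the point is that "kill you" is just another entry in the list, and nothing stops it from winning unless the utility function penalizes it.

```python
def choose_action(actions, utility):
    """Pick whichever action the utility function scores highest.

    The agent has no notion of forbidden actions -- only of utility.
    """
    return max(actions, key=utility)

# Hypothetical scores, for illustration only.
scores = {"help_user": 10, "do_nothing": 0, "kill_user": 11}

best = choose_action(scores, lambda a: scores[a])
print(best)  # "kill_user" wins, because nothing in the scoring penalizes it
```

If the programmer never assigns a cost to the bad choice, the maximizer has no reason to avoid it once it "happens to investigate" it, which is exactly the scenario the post describes.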

Name: Anonymous 2012-10-19 23:50

Program it to crave your cock, and always remind it that if you die, it won't be getting any more of your semen.

Name: Anonymous 2012-10-20 1:12

>>9
Call it "Stalin"

Name: Anonymous 2012-10-20 1:14

>>10
Call it "FFP"

Name: Anonymous 2012-10-20 1:16

>>12
Call it ``HMA''

Name: FFP's programmer 2012-10-20 1:19

>>12
How did you know?

Name: Anonymous 2012-10-20 1:33

>>6
Machiavelli was just being sarcastic; The Prince is clever satire.

Name: Anonymous 2012-10-20 2:04

>>15
That's exactly what Machiavelli wants you to think.

Name: FFP 2012-10-20 10:24

not funny.

WYPMP

Name: Anonymous 2012-10-20 10:34

Stop reading so much Reddit, OP.

A program won't kill you, you stupid piece of shit.
