
Artificial Intelligence

Name: Anonymous 2010-01-22 14:53

Whenever people talk about artificial intelligence, all they can say is that "it's possible" but "it's not there yet." In this thread, I propose we discuss our various philosophies and ideas for implementation of the various forms of artificial intelligence. After all, it's an interesting topic that deserves more than a wave of the hand.

There are many types of theoretical artificial intelligences. Let us define our first type as that of a "superhuman." This type is designed to think in a manner similar to humans, but faster, with different kinds of interests, and the possibility of death significantly reduced. To create something like this, we need to look at the human brain.

The human brain experiences emotions. How it responds to these emotions is what defines us as intelligent creatures. Most people seem to be under the misconception that human emotions are something that can never be replicated in a machine; it's a series of chemical reactions, you can't induce that in a computer, they say. I, however, believe that this is incorrect, and not for the reason of "you can simulate the atoms in our universe and stick a brain in the simulation, and theoretically it will think just like a man."

Human emotions are actually simpler to understand than one might realize. Emotions are simply different levels of worrying. When you feel relaxed, you feel good because your mind isn't focused on anything important. When you feel accomplished, you feel good because your mind was focused on some important problem, but you are now secure in the knowledge that you don't have to worry about it anymore. The feeling of "working towards something" is a matter of knowing a problem and worrying about it, but being secure in the knowledge that you are doing something about it. Likewise, you feel frustration when you are trying to work towards something, but what you are doing isn't helping. You feel hollow when you have nothing good to worry about. The list goes on and on, but I think you get the point: emotions are defined as different levels of worrying.
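Half-joking, here is that taxonomy in code. The emotion names are the ones above; the four boolean inputs and the mapping between them are entirely my own invention, so take it as a sketch, not a theory:

```python
def emotion(has_problem, solved, acting, helping):
    """Toy classifier for 'emotions are different levels of worrying'.
    The inputs describe what the mind is worrying about and what it is
    doing about it; the mapping itself is guesswork."""
    if solved:
        return "accomplished"  # was worried, now secure
    if not has_problem:
        return "relaxed"       # the post also names "hollow" here; telling
                               # them apart needs more state than this toy has
    if acting and helping:
        return "working towards something"
    if acting:
        return "frustrated"    # effort isn't helping
    return "worried"

print(emotion(has_problem=True, solved=False, acting=True, helping=False))
# frustrated
```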

Of course, you are all aware of the concept of the conscious and subconscious. The subconscious is simply a device that interprets a situation and sets a certain mood. The conscious then responds in a manner according to whatever the mood currently is. The important thing, however, is that the conscious has little to no knowledge of the workings of the subconscious; a black-box, if you will. The subconscious, on the other hand, understands everything about the conscious. It is this inability to ultimately control our emotions (and thus, our subconscious) that defines us as human.

Likewise, if you wish to create a man-like, self-centric artificial intelligence, you must simply give it:

1. The ability to rationalize and react
2. Different moods that affect the way it reacts
3. A device that controls these moods in a manner that is transparent to the intelligence itself

Once you give the intelligence these requirements, a real personality will spring up; after all, that which defines unique personalities is how the creature's subconscious responds to situations and what situations activate which moods.
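A minimal sketch of those three requirements in Python. The mood names, the trigger words, and the responses are all placeholders I made up; a real subconscious would presumably be learned rather than a lookup table:

```python
class Subconscious:
    """Requirement 3: interprets the situation and sets a mood, but its
    workings are invisible to the conscious part - a black box."""
    def mood_for(self, situation):
        # Hypothetical hard-coded mapping; stands in for something learned.
        if "threat" in situation:
            return "anxious"
        if "solved" in situation:
            return "accomplished"
        return "relaxed"

class Intelligence:
    def __init__(self):
        self._subconscious = Subconscious()  # never inspected by react()
        self.mood = "relaxed"                # requirement 2: a current mood

    def perceive(self, situation):
        # The mood shifts; the conscious part only ever sees the result.
        self.mood = self._subconscious.mood_for(situation)

    def react(self, situation):
        # Requirement 1: rationalize and react, colored by the current mood.
        return {
            "anxious": f"worry about {situation}",
            "accomplished": f"stop worrying about {situation}",
            "relaxed": f"ignore {situation}",
        }[self.mood]

ai = Intelligence()
ai.perceive("threat at the door")
print(ai.react("threat at the door"))  # worry about threat at the door
```

The point of the split is that `react` can read `self.mood` but has no way to see why `mood_for` chose it, which is the black-box property from above.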

When you look at the situation this way, the things standing between man and his child are not quite as abstract as they are commonly assumed to be.

Name: Anonymous 2010-01-23 21:38

You're not helping your argument. Your combat robot is going to kill you.

Name: Anonymous 2010-01-23 22:23

KILL MY ANUS

Name: Anonymous 2010-01-23 22:30

>>41
You're assuming that the combat mode would involve actual AI. A "kill unless" logic seems sufficient.

Name: Anonymous 2010-01-24 4:11

>>43
Actually, what I'm assuming is that your mouthbreathing bodyguard will not be on the ball to defend you from your "waifu" when she tears you a new one and leaves you to bleed out.

Name: Anonymous 2010-01-24 6:50

Jeff Hawkins on TED talks had a couple of somewhat interesting points:
http://www.youtube.com/watch?v=G6CVj5IQkzk

Name: Anonymous 2010-01-24 11:41

>>45
He insulted our reptilian overlords.

Name: Anonymous 2010-01-24 14:48

Hmm, the problem of desires has never much crossed my mind in the context of AI - and it is indeed a serious one.

Human desires and motivations can basically be broken down to a single rule: the propagation and preservation of the genes contained within every cell of the body... from this point of view sex might seem perverse - but if the propagation of every separate gene is taken as the starting condition - the principle holds true.

This presumption seems to have no empirically observable, statistically significant contradictions.

Our intelligence is simply a by-product of evolution.

Predicting the future using abstraction and induction, and carrying over the principles of survival through a vivid system of communication in the hope of receiving some vital information back and using it in the future - that's what the human brain is about.

If all technical aspects were ignored, what kind of motivation we would bestow upon something we might call AI is truly a difficult question.

There is no absolute path in life which would be good or bad - not even as a limit of attainment - assessing a "shade of gray" is a generally arbitrary task.

What we might think to be "good" using our limited intelligence would be just as likely to produce a truly "successful mechanism" (to some arbitrary end) as to fail miserably at the implied task at hand, or to end its own existence out of sheer desperation.

...

Perhaps a synthesis and a symbiosis (at least for some time) of technology and mind is a more realistic concept (as defined by the logic and the rules of the world known to me) than a stand-alone machine intellect, which is beyond true conception by the human mind anyway - just like magic it has its allure - but it will either be only an illusion of intellect, like the analogues of the silly chatbots everyone tries to produce, or we will actually come up with something that will make itself much more complex than we imagined, and whose motives shall then be unknown to us, even if it was us that provided it with a set of starting rules (Asimov's "I, Robot" would probably be a good, albeit simplistic and trivial, example).

The latter possibility is really just a bit too far off... yet our brain simply obeys the laws of physics, which clearly allow whatever we call "intelligence", so there's no good reason not to look at our society as the cells of a newborn - multiplying and evolving into tissues for a new organism.

Name: Anonymous 2010-01-24 16:33

>Human's desires and motivations can basically be broken down to a single rule the propagation and preservation of genes contained within every cell of its body... from this point of view sex might seem perverse - but if its the propagation of every one separate gene that is taken as a starting condition - the principle holds true.
It does not.

Name: Anonymous 2010-01-24 17:53

>>45,46
Steve Grand is more interesting than Hawkins on this topic.

Hawkins has some interesting things going on with the cortex, but he's betting on one trick. Grand is more holistic, and would never say such a silly thing as "your body is just along for the ride." Hawkins doesn't address learning beyond memorization (to be fair he seems to consider them to be the same thing), but Grand has demonstrated internal drives creating learned behavior based on results. He also speaks highly of our reptilian overlords.

(Don't get me wrong; Hawkins has some important things to say, but if you start with his material you will be blind to what is missing. If you start with Grand's, you will know exactly what is missing--at which point you should read Hawkins.)

Name: Anonymous 2010-01-24 19:11

>>48

Does too.

Name: Anonymous 2010-01-26 22:27

o     \o    \o/   \o    o    <o     <o>    o>    o
.|.     |.    |     /    X     \      |    <|    <|>
/ \     >\   /<     >\  /<     >\    /<     >\   /<


BUMP

Name: Anonymous 2010-01-27 0:50

>>51
why would you bump this festering pile of robo-waifu-fetishism?

Name: Mature Related 2010-01-27 16:28

I think humans have to become intelligent first.

Name: Anonymous 2010-01-28 23:07

o     \o    \o/   \o    o    <o     <o>    o>    o
.|.     |.    |     /    X     \      |    <|    <|>
/ \     >\   /<     >\  /<     >\    /<     >\   /<


SAGE

Name: Anonymous 2010-01-28 23:07

>>54

fuck.

Name: Anonymous 2013-01-18 23:21

/prog/ will be spammed continuously until further notice. we apologize for any inconvenience this may cause.
