
Artificial General Intelligence

Name: (λ λ λ) 2012-07-24 2:19

Many questions, few answers.

• Under what circumstances is a synapse formed between two neurons?  How does the likelihood of formation vary with distance?  Since a synapse is generally unidirectional, what determines its direction?
• Under what circumstances is a negative-weight synapse between two neurons formed? (This one is far more difficult to answer.)
• In the early stages of human development, what aids the formation of the complex sensory input processing networks (e.g. vision)?  Is it the boosted number of synapses?  Is the structure of these purpose-specific networks actually encoded in the DNA or do they form naturally as a result of the sensory input?
• How does the number of neurons and synapses required to drive a sentient (artificial) brain vary with the amount of sensory input (e.g. how many neurons do you need for 128x128 (mapped like the human eye) stereoscopic colour vision)?
• How many neurons and synapses are required for the mastery of a natural language?
• What kind of (virtual) environment would be the most appropriate for an AGI to "grow" in?
• The human brain also uses chemical signalling for things like pain and emotion; are these actually necessary for the development of a sentient being?  (The question is warranted: pain is a very quick path to self-awareness, since it triggers self-preservation.)
• What is the simplest (computationally speaking) model that accounts for the complexity of the brain and approximates its functioning?

Name: Anonymous 2012-07-24 2:29

0 fucking bits

Name: Anonymous 2012-07-24 2:50

>>1
you're making a fundamental error: assuming that all brain information is processed and stored as electrical impulses between synapses

Name: Anonymous 2012-07-24 3:05

>>3
Well, synapses are generally chemical instead of electrical (otherwise you wouldn't have negative weighting); but leaving that aside, isn't all memory stored as just synapse weights between neurons?
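The "negative weighting" bit is easy to see in the standard artificial-neuron model; here's a minimal sketch (the weights, inputs, and threshold are made up for illustration):

```python
# Minimal artificial neuron: weighted sum of inputs through a threshold.
# Negative weights model inhibitory synapses, positive ones excitatory.

def neuron(inputs, weights, threshold=0.5):
    """Fire (return 1) if the weighted input sum exceeds the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Two excitatory inputs plus one inhibitory input (hypothetical values):
print(neuron([1, 1, 1], [0.4, 0.4, -0.6]))  # inhibition wins -> 0
print(neuron([1, 1, 0], [0.4, 0.4, -0.6]))  # excitation wins -> 1
```

In this model, "memory as synapse weights" just means the weight vector is the only thing that changes when the net learns.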

Name: Anonymous 2012-07-24 3:25

>>4
>isn't all memory stored as just synapse weights between neurons?
no, most of the brain is redundant, there is very little involved in the network between synapses, much of it could be removed

Name: Anonymous 2012-07-24 3:37

>>5
Okay, operating under that assumption, where/how is brain information stored and processed?

Name: Anonymous 2012-07-24 4:20

>>1
Those are the questions that need to be figured out.

Watch this:

http://www.youtube.com/watch?v=_rPH1Abuu9M

Name: Anonymous 2012-07-24 4:25

>>7
Thanks, will be watching those shortly.

Name: Anonymous 2012-07-24 5:42

Neural nets suck so much cock it's unbelievable. They take eons to converge only to come up with a program that solves the problem so shittily that I would have made a better program in 5 minutes using the analytic approach.

Name: Anonymous 2012-07-24 5:44

>>9
Your brain is a neural net, and I bet you didn't learn programming in five minutes.

Name: Anonymous 2012-07-24 7:51

To OP, why are you asking how many neurons are needed when you can just test it?

I learned this strategy upon realizing that the more neurons, the longer the convergence time: start with 1 neuron and 1 layer, and gradually increase the number of neurons and layers when trying to find better candidates. Because lots of neurons means slow convergence, the smaller networks will out-compete the larger ones, so you naturally end up with a net just large enough to solve the problem.

When adding new neurons, initialise their weights to zero so that the evolution does not get stuck.
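A rough sketch of that grow-from-small scheme, using plain lists of weight rows (the representation and the `add_neuron` helper here are mine, not a standard API):

```python
def add_neuron(net, layer_idx):
    """Grow layer `layer_idx` by one neuron whose weights are all zero,
    so the larger net initially computes exactly what the old one did
    and the evolution does not get stuck (per the advice above)."""
    width = len(net[layer_idx][0])          # fan-in of this layer
    net[layer_idx].append([0.0] * width)    # new neuron, zero weights
    if layer_idx + 1 < len(net):            # next layer gains one input
        for row in net[layer_idx + 1]:
            row.append(0.0)

# Start minimal, as suggested: one layer containing one neuron with one
# input; net[i] is the weight matrix of layer i (one row per neuron).
net = [[[0.0]]]
add_neuron(net, 0)
print(len(net[0]))  # layer 0 now has 2 neurons, all weights still zero
```

The actual evolutionary loop (mutate weights, evaluate fitness, occasionally call `add_neuron` on a candidate) goes around this.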

Name: Anonymous 2012-07-24 11:26

>>11
>To OP, why are you asking how many neurons are needed when you can just test it?
Because the question is how many neurons are required to give the AI visual sensory input of a given resolution and make it sentient.  So far, the latter has never been achieved, so testing is a bit out of the question.  The question that needs to be answered before answering this question is how the fuck to put together and educate an AI to sentience.

>Because lots of neurons means slow convergence, the smaller networks will out-compete the larger ones, thus you will naturally end up with a small net that is just large enough to be smart enough to solve the problem.
To solve the problem?  This thread is not about weak/applied AI, it's about strong AI (the kind that can think and learn on its own).

Name: Anonymous 2012-07-24 11:42

>>12

Everything that isn't instinctual is learned from experience. For a program to be able to learn automatically, you need some sort of algedonic feedback loop, i.e. a judge or fitness function that decides whether the program is doing well. But you probably already knew that. Strong AI is also just applied AI; it is just applied to lots of things. Humans survive by forming concepts. The environment made us evolve to form concepts; the environment is humanity's judge. Sentience[1] is not necessary for survival/intelligence, it's just for introspection (you wouldn't be here if you couldn't, etc).

What you want is a program that can, by learning from experience, form concepts and abstract ideas, i.e. a program that behaves like a human. Good luck on that. No one really knows why the environment on Earth is like it is (suitable for sentient creatures that form concepts), but maybe you can figure it out.

1. ^ Here I assume by sentience you mean consciousness. The definitions of this word vary so much that even the definitions of the definitions will vary, so we will end up with conflicts, misunderstandings and disagreements.
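An algedonic loop of that kind boils down to generate-judge-keep; a toy sketch, where the target and the mutation scheme are invented purely for illustration:

```python
import random

TARGET = 42.0  # stand-in for whatever the "environment" rewards

def judge(candidate):
    """Algedonic signal: less distance from the target, less 'pain'."""
    return -abs(candidate - TARGET)

def learn(steps=1000):
    """Generate a variation, keep it only if the judge approves."""
    best = 0.0
    for _ in range(steps):
        trial = best + random.gauss(0.0, 1.0)   # try a variation
        if judge(trial) > judge(best):          # environment as judge
            best = trial
    return best

print(learn())  # climbs toward 42
```

The program never sees the target directly, only the judge's verdicts; that's the sense in which the environment does the teaching.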

Name: Anonymous 2012-07-24 23:03

>>13
>you need some sort of algedonic feedback loop, i.e. a judge or fitness function to decide if the program is doing well.
That's part of the process of learning things, and that would work fine in early stages of the AI's development (way before sentience, that is).  But some things the brain learns to do don't obey that; for instance, the neurons that handle processing of sensory input have no way of 'knowing' whether they are structuring themselves the right way... or maybe they do; if they structure themselves the right way, the brain can indirectly use the sensory input reliably to get something it wants (positive feedback).

Name: Anonymous 2012-07-25 1:22

>>14
They don't know. The neurons that handle sensory input have gone through millennia of natural selection, the configurations that weren't structured the right way simply died out.

Name: Anonymous 2012-07-25 1:43

>>15
So you are saying that the structure of the links between the neurons that handle sensory input is hardcoded in the DNA, as opposed to arising naturally from correlated stimuli?

Name: Anonymous 2012-07-25 2:29

>>16
It's really both. The genes play an important role in defining their general structure, which is then improved on by environmental factors and ``need'' by (for example) developing ``optimized'' pathways for common activation patterns.
Of course said genes can and do change, but in the wild ``bad'' genes (i.e. can't see, can't hear, can't move) don't get the chance to reproduce.
I don't think you can get a strong AI without reproduction and some kind of selection on the ``offspring''.
I'm by no means an expert in biology, genetics, or AI, though. These are just random bits of knowledge I collected during the years out of curiosity.
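The ``optimized pathways for common activation patterns'' part is roughly Hebbian learning (``neurons that fire together wire together''); a minimal sketch with made-up activation values:

```python
def hebbian_update(weight, pre, post, rate=0.1):
    """Classic Hebb rule: dw = rate * pre * post, i.e. a synapse gets
    stronger whenever both neurons are active at the same time."""
    return weight + rate * pre * post

w = 0.0
# Repeating a common activation pattern carves out a stronger pathway:
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w)  # has grown toward 1.0
```

No judge is needed here, which is why it fits the ``environmental factors and need'' side rather than the fitness-function side.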

Name: Anonymous 2012-07-25 2:59

>>17
>I don't think you can get a strong AI without reproduction and some kind of selection on the ``offspring''.
Chicken and egg.

Name: Anonymous 2012-07-25 3:23

Name: Anonymous 2012-07-25 5:16

DO YOUR OWN HOMEWORK

Name: Anonymous 2012-07-25 5:17

>>20
I would gladly participate in any course in which these questions were offered as homework.

Name: Anonymous 2012-07-25 8:17

Intelligent dubs
