

Neural Networks: Feed-Forward Nets

Name: !id/tRanCE. 2010-10-12 22:19

I've got a problem with a network I'm trying to make and I'm sure the answer is blindingly obvious but I can't see it.

If you have done this before or even have an example that explains what I'm doing wrong, that would be great.

The problem is with training it, I think...

Here is the network so far:

1     4     7
2     5     8
3     6     9

Nodes 1,2 & 3 are the input nodes, 4,5 & 6 are the hidden nodes and 7,8 & 9 are the output nodes. All the input nodes are linked by weights to all the hidden nodes and all the hidden nodes are linked to all the output nodes by weights.

So let's say I set the inputs to 0.75,0.1,0.1 and the desired outputs to 1,0,0. I can forward propagate and then backward propagate a bunch of times and the network will work properly.

But I want it to do this: if the inputs are 0.75,0.1,0.1 then the outputs should be 1,0,0. If the inputs are 0.1,0.75,0.1 then the outputs should be 0,1,0. And if the inputs are 0.1,0.1,0.75 then the outputs should be 0,0,1.

What I've been doing to try to get this working is this:

loop(1000 times)
{
     set the inputs to 0.75,0.1,0.1;
     set the desired outputs to 1,0,0;
     forward propagate;
     backward propagate;
    
     set the inputs to 0.1,0.75,0.1;
     set the desired outputs to 0,1,0;
     forward propagate;
     backward propagate;

     set the inputs to 0.1,0.1,0.75;
     set the desired outputs to 0,0,1;
     forward propagate;
     backward propagate;
}

But regardless of the inputs I put in, the outputs don't come close to what I want; they usually sit around 0.2,0.2,0.2.

I can get it working fine with only one output, but if I have more than one output and try to train the network to do two different things, the whole thing breaks down.
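For reference, interleaved training like the loop above does converge on this task for a 3-3-3 sigmoid net. Here is a minimal pure-Python sketch of the setup described above — note that the sigmoid activation, bias terms, learning rate, and random initialization are my assumptions, since the post doesn't say what the actual code uses:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 3-3-3 net: w_hidden[j][i] connects input i to hidden node j, and
# w_output[k][j] connects hidden node j to output node k.
w_hidden = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(3)]
b_hidden = [0.0, 0.0, 0.0]
w_output = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(3)]
b_output = [0.0, 0.0, 0.0]

def forward(x):
    h = [sigmoid(sum(w * v for w, v in zip(w_hidden[j], x)) + b_hidden[j])
         for j in range(3)]
    o = [sigmoid(sum(w * v for w, v in zip(w_output[k], h)) + b_output[k])
         for k in range(3)]
    return h, o

def backward(x, h, o, target, lr=0.5):
    # delta rule for sigmoid units: error times the derivative o*(1-o)
    d_out = [(t - o[k]) * o[k] * (1 - o[k]) for k, t in enumerate(target)]
    # hidden deltas use the OLD output weights, so compute them before updating
    d_hid = [h[j] * (1 - h[j]) * sum(d_out[k] * w_output[k][j] for k in range(3))
             for j in range(3)]
    for k in range(3):
        for j in range(3):
            w_output[k][j] += lr * d_out[k] * h[j]
        b_output[k] += lr * d_out[k]
    for j in range(3):
        for i in range(3):
            w_hidden[j][i] += lr * d_hid[j] * x[i]
        b_hidden[j] += lr * d_hid[j]

patterns = [([0.75, 0.1, 0.1], [1, 0, 0]),
            ([0.1, 0.75, 0.1], [0, 1, 0]),
            ([0.1, 0.1, 0.75], [0, 0, 1])]

for _ in range(5000):
    for x, t in patterns:
        h, o = forward(x)
        backward(x, h, o, t)
```

After training, the largest output lines up with the 1 in each target; with a sigmoid the outputs approach but never reach exactly 1.0.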

Please help...

Name: Anonymous 2010-10-12 22:21

bitches don't know about my gauss-jordan elimination

Name: !id/tRanCE. 2010-10-12 22:23

uhh what?

Name: Anonymous 2010-10-12 22:35

You can model this as a system of linear equations, then solve it (NB: there may be more than one solution).

Name: !id/tRanCE. 2010-10-12 22:48

I know; I could code it in manually. But I'm trying to get a neural network to learn how to do it.

Name: Anonymous 2010-10-13 0:41

>>1,3,5
Do your own homework Mr. GreenText

Name: Anonymous 2010-10-13 0:58

I have been and i can't figure out what i've done wrong. Which is why i'm asking here

Name: Anonymous 2010-10-13 1:52

>>7
i can't figure out what i've done wrong.
* Not saging.
* Typing like a 13-year-old imageboard reject.
* No [code] tags.

You're welcome.

Name: !id/tRanCE. 2010-10-13 2:38

>>8

It's my first time on this board. Thanks for being so inviting.

Name: Anonymous 2010-10-13 3:18

>>9
back to /b/, please

Name: Anonymous 2010-10-13 4:49

>>9
When it was my first time on this board, I used sage and EXPERT BBCODE like any other ``faggot'' around here. Get off my lawn.

Name: Anonymous 2010-10-13 5:13

>>11
Only faggots sage all the time. If a thread is relevant, it should be bumped.

Also, I bump because my posts are like manna from heaven, and I must ensure that they are seen by the plebeians.

Name: Anonymous 2010-10-13 5:19

/prog/ culture is hostile. If you do as everyone else does, you'll fit in fine. Otherwise, dick-waving arguments will break out.

Name: Anonymous 2010-10-13 6:31

>>12
I am a faggot, thus putting sage in the email field is always valid.

Name: Anonymous 2010-10-13 8:07

All was answered in >>3,4 and >>10.

Name: Anonymous 2010-10-13 8:11

Also, >>1, don't listen to the ``back to /b/'' faggots. They're just tired of seeing yet another person come and ask for programming help. This is not a help board; it is a discussion board for programming. When we do help people, it's because we find the problem presented actually interesting to implement. No offense, but you're much better off posting your question to Stack Overflow or whatever programming help board you might know.

Name: Anonymous 2010-10-13 9:06

Did you write your own NN code, or are you working with a NN package of some kind? Expecting exact 1.0 outputs usually means the activation function is suboptimal [1].

If you wrote the code yourself the problem is interesting. If not, this is just a /prog/ version of "how do I computer?"

[1] Usually, φ⁻¹(1.0) = Inf
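To illustrate the footnote: with a sigmoid, an output of exactly 1.0 would require an infinite weighted sum, so training toward hard 1/0 targets pushes the weights to grow without bound. Softened targets such as 0.9/0.1 are a common workaround (a sketch; the 0.9/0.1 values are a convention, not something from this thread):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    # inverse sigmoid: the pre-activation needed to emit output p
    return math.log(p / (1.0 - p))

print(logit(0.9))   # ln(9) = 2.197..., reachable with finite weights
# logit(1.0) divides by zero: no finite weighted sum yields exactly 1.0
```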
