>>1
They don't, really. They're just a bunch of units called neurons, each of which takes one or more weighted input values and produces an output from them. A neuron can (and usually does) feed into one or more other neurons. Usually they form a very simple structure: a few input neurons wired to a hidden layer, whose outputs connect to the output neurons. Making this do anything useful is the hard part, since in general you can't predict what a change to a weight or to the wiring will do to the output. The more hidden layers you add, the harder that gets, and the more complicated the wiring, the worse it gets.
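To make the input → hidden → output idea concrete, here's a minimal sketch in plain Python: a 2-2-1 network with sigmoid neurons and hand-picked weights that happens to compute XOR. The weights and the helper names (`neuron`, `forward`) are my own illustration, not anything standard; in practice nobody picks weights by hand, which is exactly the "can't predict what changes will do" problem.

```python
import math

def sigmoid(x):
    # Squash any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # One neuron: weighted sum of inputs plus a bias, then the activation.
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def forward(x):
    # Hidden layer: two neurons, hand-tuned to act like OR and NAND.
    h1 = neuron(x, [20, 20], -10)    # roughly OR(x1, x2)
    h2 = neuron(x, [-20, -20], 30)   # roughly NAND(x1, x2)
    # Output neuron: roughly AND(h1, h2), which gives XOR overall.
    return neuron([h1, h2], [20, 20], -30)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, round(forward([a, b])))
```

Running it prints the XOR truth table (0 0 0, 0 1 1, 1 0 1, 1 1 0). Nudge any one of those weights and the output can flip in non-obvious ways, which is why training is done by algorithms rather than intuition.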
If you can't understand the basic components of a neural network, you're kind of dumb, but if you're just not sure what to do with them, rest assured you're not alone. Neural networks are dodgy things.