Back-propagation works with a logic very similar to that of feed-forward; the difference is the direction of data flow. In the feed-forward step, you start from the inputs and compute each layer's output from the layer before it, propagating the values forward so that the neurons ahead can compute their activations.
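As a rough sketch of what the forward pass computes (the names `feed_forward`, `weights`, and `biases` are illustrative, and a sigmoid activation is assumed; none of this is tied to a particular library):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feed_forward(x, weights, biases):
    """Propagate an input vector forward, one layer at a time.

    `weights` and `biases` hold one matrix/vector per layer.
    Returns every layer's activations, since the back-propagation
    step will need them later.
    """
    activations = [x]
    for W, b in zip(weights, biases):
        x = sigmoid(W @ x + b)  # weighted sum, then non-linearity
        activations.append(x)
    return activations
```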
In the back-propagation step, you cannot know the error of every neuron directly, only that of the neurons in the output layer. Calculating the errors of the output nodes is straightforward: take the difference between a neuron's output and the target output for that instance in the training set. The neurons in the hidden layers have no such reference, so you have to pass the error values back to them. From the weighted sum of the errors in the layer ahead, each hidden neuron can compute its own error and then update its weights and other parameters.
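A minimal sketch of that error flow, continuing the `feed_forward` example above (it assumes a sigmoid activation and squared-error loss, so the derivative term is `a * (1 - a)`; an illustration, not a full training loop):

```python
def back_propagate(activations, target, weights):
    """Compute a per-neuron error term (delta) for each layer,
    starting at the output and moving backwards."""
    # Output layer: the error is just prediction minus target,
    # scaled by the activation's derivative.
    a_out = activations[-1]
    delta = (a_out - target) * a_out * (1 - a_out)
    deltas = [delta]
    # Hidden layers: each layer's error is the weighted sum of the
    # errors from the layer ahead, scaled by its own derivative.
    for l in range(len(weights) - 1, 0, -1):
        a = activations[l]
        delta = (weights[l].T @ delta) * a * (1 - a)
        deltas.insert(0, delta)
    return deltas
```

Each delta can then drive a weight update, e.g. `weights[l] -= learning_rate * np.outer(deltas[l], activations[l])`, which is the gradient-descent step these error terms exist to feed.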
A step-by-step demo of the feed-forward and back-propagation steps can be found here.
Edit: If you're new to neural networks, you can begin by learning the Perceptron, then advance to NN, which is actually a multilayer perceptron.