This is the function to calculate the 'error' for the output layer:
ErrorOut = Output * ActivationPotential * (Target - Output)
Though the more correct term would be "delta".
This is the function for all the other layers:
Error = Output * ActivationPotential * SUM(Weights * ErrorOut)
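To make the two formulas concrete, here is a minimal sketch in Python (my own illustration, not your code: it assumes a sigmoid activation, so the derivative term Output * ActivationPotential becomes output * (1 - output); the function names are hypothetical):

```python
def output_delta(output, target):
    # Delta for the output layer: f'(net) * (target - output),
    # with f'(net) = output * (1 - output) for a sigmoid.
    return output * (1.0 - output) * (target - output)

def hidden_delta(output, weights_to_next, deltas_next):
    # Delta for a hidden node: f'(net) * SUM(weight * delta of next layer).
    weighted_sum = sum(w * d for w, d in zip(weights_to_next, deltas_next))
    return output * (1.0 - output) * weighted_sum

# Example: one output node and one hidden node feeding it.
d_out = output_delta(0.8, 1.0)            # 0.8 * 0.2 * 0.2 = 0.032
d_hid = hidden_delta(0.6, [0.5], [d_out])
```

Notice that neither delta uses the raw training inputs or a separate Beta factor; only activation levels, weights, and the downstream deltas appear.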
This is the one you are using.
    //calculate errors
    for (int c = 0; c < Y.Length / 2; c++)
    {
        errors[c] = TS[a, 1, c] - Y[c, 0];
    }
    //last level (Y)
    for (int c = 0; c < Y.Length / 2; c++)
    {
        Y[c, 1] = (float)(errors[c] * B * (1 - Math.Pow(Y[c, 0], 2)));
    }
    //then change the values
    for (int d = 0; d < Y.Length / 2; d++)
    {
        //error*Beta*(1-Y^2)*sum
        H3[c, 1] = (float)(B * (1 - Math.Pow(Y[d, 0], 2)) * sum);
    }
Now, to be honest with you, this part of the code doesn't make any sense. You're using an array of inputs into the first layer (TS) to calculate your delta. Then you use Beta to calculate Y, and use Beta again to calculate the error. If you look at the equations, the activation levels from the previous layer are used instead of TS, and Beta is not used in the calculation of the delta at all.
In pseudo code the Neural Network would probably look something like this:
    //The following code belongs to a loop which goes through every sample in the traindata.
    for each node in InputLayer
        activationLevel[Node] = neuron(Weights(Node), TD(Node))
    end

    for each node in otherLayers
        activationLevel[Node] = neuron(Weights(Node), activationLevel[prevNodes])
    end

    //Output layer only has one delta
    deltas[outputLayer] = activationLevel[outNode] * (activationPotential) * (labels - activationLevel[outNode])

    for each node in lastHiddenLayer
        deltas[node] = activationLevel[node] * activationPotential * WeightsToOutput * deltas[outputLayer]
    end

    //The rest is pretty similar, you can work that out by yourself.
    //After you get the deltas, you update the weights. Then you iterate through this loop
    //again for the next sample.
Since I only wrote that pseudo code quickly, there may be some mistakes in it, but it should give you the general idea. Hope this helps.
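If it helps, the loop above can also be sketched in runnable form. This is a toy example of my own in Python (one hidden layer of two nodes, sigmoid activations, a single made-up training sample; the learning rate, layer sizes, and variable names are all illustrative, not taken from your program):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(weights, inputs):
    # Weighted sum of the inputs, passed through the activation function.
    return sigmoid(sum(w * i for w, i in zip(weights, inputs)))

random.seed(0)
eta = 0.5                              # learning rate (arbitrary choice)
sample, label = [0.05, 0.10], 1.0      # one made-up training sample
w_hidden = [[random.uniform(-1, 1) for _ in sample] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]

for _ in range(1000):
    # Forward pass: activation levels, layer by layer.
    hidden = [neuron(w, sample) for w in w_hidden]
    out = neuron(w_out, hidden)

    # Deltas: output layer first, then the hidden layer.
    delta_out = out * (1 - out) * (label - out)
    delta_hidden = [h * (1 - h) * w_out[j] * delta_out
                    for j, h in enumerate(hidden)]

    # Weight updates: eta * delta * incoming activation.
    w_out = [w + eta * delta_out * h for w, h in zip(w_out, hidden)]
    for j, dh in enumerate(delta_hidden):
        w_hidden[j] = [w + eta * dh * x for w, x in zip(w_hidden[j], sample)]

print(out)  # moves toward the label as training progresses
```

Note that the deltas are computed from activation levels and weights only, and the weight update happens after the deltas for all layers are known.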