Madaline

MADALINE (Many ADALINE[1]) is a three-layer (input, hidden, output), fully connected, feed-forward artificial neural network architecture for classification that uses ADALINE units in its hidden and output layers; that is, each unit applies the sign function to a weighted sum of its inputs.[2] Early implementations of the network used memistors as adjustable weights. Because the sign function is not differentiable, MADALINE networks cannot be trained with backpropagation; three dedicated training algorithms, called Rule I, Rule II and Rule III, have been proposed instead. The first of these dates back to 1962 and cannot adapt the weights of the hidden–output connections.[3] The second, described in 1988, improved on Rule I.[1] The third applies to a modified network with sigmoid activations in place of the signum; it was later shown to be equivalent to backpropagation.[3]
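To make the architecture concrete, the following Python/NumPy sketch implements a MADALINE forward pass. The class name, layer sizes, and random weight initialization are illustrative assumptions; only the structure (two weighted layers of ADALINE units with sign activations) comes from the sources above.

    import numpy as np

    def sign(s):
        # Signum activation used by ADALINE units; a sum of exactly 0 maps to +1.
        return np.where(s >= 0, 1.0, -1.0)

    class Madaline:
        def __init__(self, n_in, n_hidden, n_out, seed=0):
            rng = np.random.default_rng(seed)
            # One weight row (plus a bias weight) per ADALINE unit in each layer.
            self.W_hid = rng.normal(scale=0.1, size=(n_hidden, n_in + 1))
            self.W_out = rng.normal(scale=0.1, size=(n_out, n_hidden + 1))

        @staticmethod
        def _bias(v):
            # Append a constant 1 so the bias acts as an ordinary weight.
            return np.append(np.asarray(v, dtype=float), 1.0)

        def forward(self, x):
            s_hid = self.W_hid @ self._bias(x)    # linear sums of the hidden ADALINEs
            h = sign(s_hid)                       # hard-limited hidden outputs
            y = sign(self.W_out @ self._bias(h))  # output-layer ADALINEs
            return y, h, s_hid

For example, Madaline(2, 4, 1).forward([1.0, -1.0]) returns the ±1 output vector together with the hidden signs and the linear sums used by the training rule sketched below.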

The Rule II training algorithm is based on a principle called "minimal disturbance": the weights are altered as little as possible while correcting output errors. It loops over the training examples and, for each example:

  1. computes the network output; if it already matches the target, nothing is changed;
  2. otherwise, selects the hidden ADALINE unit whose linear sum is closest to zero, i.e. the unit least confident in its current output;
  3. tentatively flips the sign of that unit's output and accepts or rejects the flip according to whether the network's error is reduced;
  4. if the flip is accepted, adapts the unit's weights with a least-mean-squares step so that its linear sum, and hence its output, actually changes sign.

When flipping single units' signs does not drive the error to zero for a particular example, the algorithm goes on to flip pairs of units' signs, then triples of units, and so on.[1] A sketch of this loop is given below.
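The following Python sketch, reusing the Madaline class above, shows the single-unit stage of the Rule II loop. It is a minimal illustration: the function name rule_ii_epoch, the step size alpha, and the restriction to hidden-layer updates are assumptions of the sketch rather than details taken from the 1988 paper, and the pair/triple stage is only indicated in a comment.

    def rule_ii_epoch(net, X, T, alpha=0.1):
        # One pass of MADALINE Rule II, single-unit trial adaptions only
        # (an illustrative sketch, not the published algorithm verbatim).
        for x, t in zip(X, T):
            y, h, s_hid = net.forward(x)
            if np.array_equal(y, t):
                continue  # output already correct: disturb nothing
            # Try hidden units starting from the least confident linear sum.
            for j in np.argsort(np.abs(s_hid)):
                h_trial = h.copy()
                h_trial[j] = -h_trial[j]  # tentatively flip unit j's output
                y_trial = sign(net.W_out @ net._bias(h_trial))
                if np.sum(y_trial != t) < np.sum(y != t):
                    # The flip reduces the error: make it permanent with an
                    # LMS-style step that pushes unit j's linear sum across zero.
                    xb = net._bias(x)
                    err = h_trial[j] - s_hid[j]  # desired minus actual linear sum
                    net.W_hid[j] += alpha * err * xb / (xb @ xb)
                    break
            # If no single flip helps, Rule II proceeds to flip pairs of
            # units, then triples, and so on (omitted in this sketch).

Because the accepted update only just pushes the chosen unit's linear sum across zero, the responses the network has already learned for other patterns are perturbed as little as possible, which is what "minimal disturbance" refers to.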

References

  1. Winter, Rodney; Widrow, Bernard (1988). "MADALINE RULE II: A training algorithm for neural networks" (PDF). IEEE International Conference on Neural Networks. pp. 401–408. doi:10.1109/ICNN.1988.23872.
  2. widrowlms: "Science in Action" (YouTube). Madaline is mentioned at the start and at 8:46.
  3. Widrow, Bernard; Lehr, Michael A. (1990). "30 years of adaptive neural networks: perceptron, madaline, and backpropagation". Proceedings of the IEEE. 78 (9): 1415–1442. doi:10.1109/5.58323.