Optimizing and Learning Algorithm for Feed-forward Neural Networks
Pilar Bachiller and Julia González
Department of Computer Science, University of Extremadura, Escuela Politécnica, 10.071 Cáceres, Spain
Received: October 20, 2000; Accepted: December 10, 2000; Published: January 20, 2001
Keywords: Feed-forward neural network, Hidden nodes, Network pruning, Orthogonal transformation, Training time
Feed-forward neural networks have emerged as a good solution for many problems, such as classification, recognition and identification, and signal processing. However, the importance of selecting an adequate hidden structure for this neural model should not be underestimated. When the hidden structure of the network is too large and complex for the model being developed, the network may tend to memorize the input and output sets rather than learn the relationships between them. Such a network may train well but test poorly when inputs outside the training set are presented. In addition, training time increases significantly when the network is unnecessarily large and complex. Most proposed solutions to this problem consist of training a larger-than-necessary network, pruning unnecessary links and nodes, and retraining the reduced network. We propose a new method to optimize the size of a feed-forward neural network using orthogonal transformations. This approach prunes unnecessary nodes during the training process, avoiding the retraining phase of the reduced network that most pruning techniques require.
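The abstract does not reproduce the paper's exact criterion, but a common way to detect redundant hidden nodes with an orthogonal transformation is a pivoted QR (Gram-Schmidt) factorization of the hidden-layer activation matrix: nodes whose activation columns are nearly linear combinations of the others leave only a tiny residual after orthogonalization and can be pruned. The sketch below, in plain NumPy, is illustrative only; the function name, the relative tolerance `tol`, and the greedy pivoting order are assumptions, not the authors' algorithm.

```python
import numpy as np

def redundant_hidden_nodes(H, tol=1e-3):
    """Split hidden nodes into (keep, prune) via greedy pivoted orthogonalization.

    H   : (n_samples, n_hidden) matrix of hidden-node activations
    tol : relative residual threshold below which a node is deemed redundant
    """
    H = np.asarray(H, dtype=float)
    R = H.copy()                          # residuals after deflation
    remaining = list(range(H.shape[1]))
    keep, prune = [], []
    ref = np.linalg.norm(H, axis=0).max() # reference scale for the threshold

    while remaining:
        norms = np.linalg.norm(R[:, remaining], axis=0)
        j = int(np.argmax(norms))         # pivot: most independent column left
        if norms[j] < tol * ref:
            prune.extend(remaining)       # all leftovers are nearly dependent
            break
        col = remaining.pop(j)
        keep.append(col)
        q = R[:, col] / np.linalg.norm(R[:, col])
        for c in remaining:               # deflate: remove component along q
            R[:, c] -= q * (q @ R[:, c])
    return keep, prune

# Example: 4 hidden nodes where node 3 exactly duplicates node 0
rng = np.random.default_rng(0)
H = rng.standard_normal((100, 3))
H = np.column_stack([H, H[:, 0]])
keep, prune = redundant_hidden_nodes(H)
```

Running this on the example flags the duplicated node for pruning while the three independent nodes survive, which is the behavior the paper exploits to shrink the network during training rather than after it.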
Cite this article as: P. Bachiller and J. González, "Optimizing and Learning Algorithm for Feed-forward Neural Networks," J. Adv. Comput. Intell. Intell. Inform., Vol.5 No.1, pp. 51-57, 2001.