JACIII Vol.5 No.1 pp. 51-57
doi: 10.20965/jaciii.2001.p0051
(2001)

Paper:

Optimizing and Learning Algorithm for Feed-forward Neural Networks

Pilar Bachiller and Julia González

Department of Computer Science, University of Extremadura, Escuela Politécnica, 10071 Cáceres, Spain

Received:
October 20, 2000
Accepted:
December 10, 2000
Published:
January 20, 2001
Keywords:
Feed-forward neural network, Hidden nodes, Network pruning, Orthogonal transformation, Training time
Abstract
Feed-forward neural networks have emerged as a good solution for many problems, such as classification, recognition and identification, and signal processing. However, the importance of selecting an adequate hidden structure for this neural model should not be underestimated. When the hidden structure of the network is too large and complex for the model being developed, the network may tend to memorize input and output sets rather than learn the relationships between them. Such a network may train well but test poorly when presented with inputs outside the training set. In addition, training time increases significantly when the network is unnecessarily large and complex. Most proposed solutions to this problem consist of training a larger-than-necessary network, pruning unnecessary links and nodes, and retraining the reduced network. We propose a new method to optimize the size of a feed-forward neural network using orthogonal transformations. This approach prunes unnecessary nodes during the training process, avoiding the retraining phase of the reduced network that most pruning techniques require.
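
The abstract does not state which orthogonal transformation the method uses, but QR factorization with column pivoting is one standard way to rank hidden nodes by how linearly independent their activations are. The following minimal sketch (Python with NumPy/SciPy; the function prune_hidden_nodes, the tolerance tol, and the keep/drop criterion are illustrative assumptions, not the authors' published algorithm) shows how redundant hidden units can be detected from the matrix of hidden-layer activations:

import numpy as np
from scipy.linalg import qr

def prune_hidden_nodes(H, tol=1e-3):
    # H: (n_samples, n_hidden) hidden-layer activations over the
    # training set. Returns the indices of hidden nodes to keep.
    # QR with column pivoting orders the columns of H so that the
    # magnitudes on the diagonal of R are non-increasing; a small
    # diagonal entry means that hidden node's output is (nearly) a
    # linear combination of the nodes already kept.
    _, R, piv = qr(H, mode='economic', pivoting=True)
    diag = np.abs(np.diag(R))
    keep = piv[diag >= tol * diag[0]]   # relative significance test
    return np.sort(keep)

# Hypothetical usage: 20 hidden nodes, two of which duplicate others.
rng = np.random.default_rng(0)
H = rng.standard_normal((200, 18))
H = np.hstack([H, H[:, :2]])    # append two redundant columns
print(prune_hidden_nodes(H))    # 18 indices; the two rank-deficient
                                # duplicate columns are dropped

The point of such a criterion is that a hidden node whose activation column is (nearly) linearly dependent on the others adds no new information for the output layer, so it can be removed while training proceeds; this is what allows the method to skip the separate retraining pass that most pruning techniques need.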
Cite this article as:
P. Bachiller and J. González, “Optimizing and Learning Algorithm for Feed-forward Neural Networks,” J. Adv. Comput. Intell. Intell. Inform., Vol.5 No.1, pp. 51-57, 2001.