JACIII Vol.11 No.1 pp. 87-95
doi: 10.20965/jaciii.2007.p0087


An Incremental Neural Network for Online Supervised Learning and Topology Learning

Youki Kamiya*, Shen Furao**, and Osamu Hasegawa**,***

*Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, R2-52, 4259 Nagatsuta, Midori-ku, Yokohama 226-8503, Japan

**Imaging Science and Engineering Lab., Tokyo Institute of Technology

***PRESTO, Japan Science and Technology Agency (JST)

Received: January 30, 2006
Accepted: May 19, 2006
Published: January 20, 2007
Keywords: self-organizing, incremental, online supervised learning, topology learning
A new self-organizing incremental network is designed for online supervised learning. During learning, an adaptive similarity threshold is used to judge whether new nodes are needed when online training data are presented to the system. Nodes caused by noise are deleted to reduce misclassification. The proposed network, which is robust to noisy training data, suits the following tasks: (1) online, or even life-long, supervised learning; (2) incremental learning, i.e., learning new information without destroying previously learned information; (3) learning without any predefined optimal conditions; (4) representing the topology structure of online input data; and (5) learning the number of nodes needed to represent each class. Experiments on artificial data and high-dimensional real-world data show that the proposed method achieves classification with a high recognition ratio, high speed, and low memory usage.
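The core idea in the abstract — inserting a new node only when an input falls outside the adaptive similarity thresholds of its nearest nodes — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's algorithm: here each node's threshold is just its distance to the nearest other node, and noise-node deletion, topology edges, and the paper's within-/between-class distance rules are omitted. The class name `AdaptiveThresholdNet` and all parameters are assumptions for illustration.

```python
import numpy as np

class AdaptiveThresholdNet:
    """Sketch of adaptive-threshold node insertion (simplified)."""

    def __init__(self):
        self.nodes = []   # weight vectors (np.ndarray)
        self.labels = []  # class label of each node

    def _threshold(self, idx):
        # Simplified adaptive threshold: distance from node idx
        # to its nearest other node.
        w = self.nodes[idx]
        dists = [np.linalg.norm(w - v)
                 for j, v in enumerate(self.nodes) if j != idx]
        return min(dists) if dists else np.inf

    def train_one(self, x, label, lr=0.1):
        x = np.asarray(x, dtype=float)
        if len(self.nodes) < 2:
            self.nodes.append(x.copy())
            self.labels.append(label)
            return
        d = [np.linalg.norm(x - v) for v in self.nodes]
        s1, s2 = np.argsort(d)[:2]  # winner and second winner
        if d[s1] > self._threshold(s1) or d[s2] > self._threshold(s2):
            # Input lies outside both thresholds: insert a new node.
            self.nodes.append(x.copy())
            self.labels.append(label)
        else:
            # Otherwise adapt the winner toward the input.
            self.nodes[s1] += lr * (x - self.nodes[s1])

    def classify(self, x):
        d = [np.linalg.norm(np.asarray(x, dtype=float) - v)
             for v in self.nodes]
        return self.labels[int(np.argmin(d))]
```

Because nodes are added only when the data demand them, the number of nodes per class is learned rather than fixed in advance, which is the property claims (3) and (5) above describe.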
Cite this article as:
Y. Kamiya, S. Furao, and O. Hasegawa, “An Incremental Neural Network for Online Supervised Learning and Topology Learning,” J. Adv. Comput. Intell. Intell. Inform., Vol.11 No.1, pp. 87-95, 2007.
References:
  [1] B. Santosa, T. B. Trafalis, and T. Conway, “Knowledge Base-Clustering and Application of Multi-Class SVM for Genes Expression Analysis,” ASME Press, 2002.
  [2] L. P. Li, C. R. Weinberg, T. A. Darden, and L. G. Pedersen, “Gene selection for sample classification based on gene expression data: study of sensitivity to choice of parameters of the GA/KNN method,” Bioinformatics, Vol.17, No.12, pp. 1131-1142, 2001.
  [3] Y. Freund and R. E. Schapire, “A decision-theoretic generalization of online learning and an application to boosting,” Journal of Computer and System Sciences, Vol.55, No.1, pp. 119-139, 1997.
  [4] L. Breiman, J. H. Friedman, R. A. Olshen, and C. J. Stone, “Classification and Regression Trees,” Belmont, CA: Wadsworth, 1984.
  [5] M. A. Mendez, C. Hodar, C. Vulpe, M. Gonzalez, and V. Cambiazo, “Discriminant analysis to evaluate clustering of gene expression data,” Federation of European Biochemical Societies Letters, Vol.522, Nos.1-3, pp. 24-28, 2002.
  [6] T. Kohonen, “Self-organization and associative memory,” Springer, Berlin, 3rd edition, 1989.
  [7] T. Kohonen, “Self-organized formation of topologically correct feature maps,” Biological Cybernetics, Vol.43, pp. 59-69, 1982.
  [8] R. Feraud and R. Clerot, “A methodology to explain neural network classification,” Neural Networks, Vol.15, No.2, pp. 237-246, 2002.
  [9] T. M. Cover and P. E. Hart, “Nearest neighbor pattern classification,” IEEE Trans. on Information Theory, Vol.IT-13, No.1, pp. 21-27, 1967.
  [10] T. Shibata, T. Kato, and T. Wada, “K-D decision tree: An accelerated and memory efficient nearest neighbor classifier,” MIRU2004.
  [11] B. V. Dasarathy, “Minimal consistent set (MCS) identification for optimal nearest neighbor decision systems design,” IEEE Trans. Syst. Man Cybern., Vol.24, No.3, pp. 511-517, 1994.
  [12] G. W. Gates, “The reduced nearest neighbor rule,” IEEE Trans. Inf. Theory, Vol.IT-18, No.3, pp. 431-433, 1972.
  [13] S. Furao and O. Hasegawa, “An incremental neural network for non-stationary unsupervised learning,” International Conference on Neural Information Processing (ICONIP2004), Calcutta, India, 2004.
  [14] B. Fritzke, “A Growing Neural Gas Network Learns Topologies,” In Advances in Neural Information Processing Systems, Vol.7, pp. 625-632, 1995.
  [15] S. Furao and O. Hasegawa, “An On-line Learning Mechanism for Unsupervised Classification and Topology Representation,” IEEE International Conference on Computer Vision and Pattern Recognition (CVPR05), San Diego, 2005 (accepted).
  [16] T. M. Martinetz, “Competitive Hebbian learning rule forms perfectly topology preserving maps,” ICANN, pp. 427-434, 1993.
  [17] C. Merz and M. Murphy, “UCI repository of machine learning databases, Irvine, CA,” University of California, Department of Information and Computer Science, 1996.
  [18] A. Passerini, M. Pontil, and P. Frasconi, “From Margins to Probabilities in Multiclass Learning Problems,” in F. van Harmelen (ed.), Proc. 15th European Conf. on Artificial Intelligence, 2002.
