
JACIII Vol.14 No.4, pp. 325-343, 2010
doi: 10.20965/jaciii.2010.p0325

Paper:

An Extension Approach for Neural Networks by Introducing a Nearest Neighbor Algorithm in Relative Coordinates

Hirofumi Suzaki* and Satoru Kuhara**

*Department of Bioinformatics, Graduate School of Systems Life Sciences, Kyushu University, 6-10-1 Hakozaki, Higashi-ku, Fukuoka 812-8581, Japan

**Faculty of Agriculture, Kyushu University, 6-10-1 Hakozaki, Higashi-ku, Fukuoka 812-8581, Japan

Received:
July 30, 2009
Accepted:
December 25, 2009
Published:
May 20, 2010
Keywords:
neural network, nearest neighbor algorithm, network inference, curse of dimensionality, generalized mean
Abstract
Neural networks are computational models that discriminate among different types of nonlinear data, and machine-learning algorithms make their behavior flexibly adjustable. The nearest neighbor algorithm, owing to its computational simplicity, is a well-studied classification method. If nearest neighbor inference is expressed as a network model whose neurons represent the data, its decision rule becomes deterministic and its parameters become adjustable. We propose a new neuron model based on the generalized mean and design a practical neural network framework that extends the nearest neighbor algorithm. Because the proposed parallel distributed processing model is not a simple distance comparison between two points but draws on information from the whole body of data, its classification is superior in principle to that of the nearest neighbor algorithm.
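
To make the connection between the generalized mean and the nearest neighbor rule concrete, here is a minimal sketch in Python. It is not the authors' implementation; the classifier structure, the Euclidean distance, and the toy data are assumptions for illustration. It relies on the standard fact that the generalized (power) mean M_p(x_1, ..., x_n) = ((1/n) * sum_i x_i^p)^(1/p) tends to min_i x_i as p goes to negative infinity, so a neuron that aggregates distances with a generalized mean interpolates, via the adjustable exponent p, between pure nearest-neighbor behavior and an average over the whole body of data:

import numpy as np

def generalized_mean(values, p):
    # Power mean of positive values; approaches min(values) as p -> -infinity.
    values = np.asarray(values, dtype=float)
    return np.mean(values ** p) ** (1.0 / p)

def classify(query, train_x, train_y, p=-20.0):
    # Hypothetical classifier: assign the class whose training points
    # have the smallest generalized-mean distance to the query.
    labels = np.unique(train_y)
    scores = [generalized_mean(
                  np.linalg.norm(train_x[train_y == c] - query, axis=1), p)
              for c in labels]
    return labels[int(np.argmin(scores))]

# Toy data (hypothetical): two 2-D Gaussian blobs.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0.0, 1.0, (20, 2)),
               rng.normal(3.0, 1.0, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print(classify(np.array([0.2, -0.1]), x, y))  # near the first blob -> 0
print(classify(np.array([2.8, 3.1]), x, y))   # near the second blob -> 1

With p = 1 each class is scored by the mean distance from all of its points to the query; as p decreases toward negative infinity the score approaches the distance to the single closest point, recovering the 1-NN decision rule. Intermediate values of p use information from the whole class rather than one point, which is the behavior the abstract describes.
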
Cite this article as:
H. Suzaki and S. Kuhara, “An Extension Approach for Neural Networks by Introducing a Nearest Neighbor Algorithm in Relative Coordinates,” J. Adv. Comput. Intell. Intell. Inform., Vol.14 No.4, pp. 325-343, 2010.
