
JACIII Vol.14 No.2 pp. 167-178
doi: 10.20965/jaciii.2010.p0167
(2010)

Paper:

Fuzzy few-Nearest Neighbor Method with a Few Samples for Personal Authentication

Yoshinori Arai*1, Nguyen Thi Huong Lien*2,*3, Kazuma Ishigaki*2,*4,
Hiroyuki Satoh*5, Teruhiko Hayashi*5, Fangyan Dong*2,
and Kaoru Hirota*2

*1Dept. Eng., C.S., Tokyo Polytechnic Univ.

*2Dept. C.I. & S.S., Tokyo Institute of Technology

*3Schlumberger K.K.

*4Hitachi Automotive Systems Co., Ltd.

*5Soliton Systems K.K.

Received:
August 9, 2009
Accepted:
September 25, 2009
Published:
March 20, 2010
Keywords:
fuzzy set, k-nearest neighbor, instance-based learning, personal authentication
Abstract
The Fuzzy few-Nearest Neighbor (Ff-NN) method, an extension of the k-Nearest Neighbor (k-NN) algorithm and a case-based learning method, is proposed. Ff-NN aims to achieve stable identification performance even when the number of learning samples is as small as two. Applied to personal authentication systems such as entry/exit authorization, Ff-NN reduces the burden of creating user dictionaries. Using 26 kinds of feature data (face images and voices) from 66 test subjects, experiments were conducted on a PC to verify the feasibility of the proposed method. The forced recognition rate of conventional single-NN is 79.2% (standard deviation 2.83), while that of Ff-NN is 87.6% (SD 1.97). Recognition rates for dictionary data with 14, 17, and 26 features are 90.6%, 92.5%, and 97.5%, respectively. Because only a very small number of samples is collected nonintrusively, two or more features are combined to improve recognition performance. The applicability of the method to personal authentication systems is demonstrated through experiments with 66 registrants, corresponding to 30 households.
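
As a minimal sketch of the classical fuzzy k-NN family that Ff-NN extends (Keller, Gray, and Givens, 1985), and not the Ff-NN algorithm itself, the following Python code classifies a query by an inverse-distance-weighted membership vote over its k nearest samples, assuming Euclidean distances between feature vectors. The function name, fuzzifier m, and toy data are illustrative assumptions, not the authors' implementation.

import numpy as np

def fuzzy_knn(train_x, train_y, query, k=3, m=2.0):
    """Return per-class fuzzy memberships for `query` (illustrative sketch).

    train_x : (n, d) feature vectors (e.g., face/voice features)
    train_y : (n,) integer class labels (registrant IDs)
    k       : number of neighbors; clipped to the dictionary size
    m       : fuzzifier (> 1) controlling how strongly distance is weighted
    """
    k = min(k, len(train_x))
    dists = np.linalg.norm(train_x - query, axis=1)
    nn = np.argsort(dists)[:k]                      # indices of k nearest samples
    # Inverse-distance weights with exponent 2/(m-1), as in classical fuzzy k-NN.
    w = 1.0 / np.maximum(dists[nn], 1e-12) ** (2.0 / (m - 1.0))
    classes = np.unique(train_y)
    memberships = np.array([w[train_y[nn] == c].sum() for c in classes])
    memberships /= memberships.sum()                # normalize memberships to sum to 1
    return classes, memberships

# Toy usage: two samples per registrant, as in the paper's few-sample setting.
x = np.array([[0.0, 0.1], [0.1, 0.0],              # registrant 0
              [1.0, 0.9], [0.9, 1.1]])             # registrant 1
y = np.array([0, 0, 1, 1])
labels, mu = fuzzy_knn(x, y, np.array([0.2, 0.1]), k=3)
print(labels[np.argmax(mu)], mu)                   # expected: class 0

Because k is clipped to the number of available samples, the classifier degrades gracefully in the few-sample setting targeted by Ff-NN, where each registrant may contribute only two dictionary entries.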
Cite this article as:
Y. Arai, N. Lien, K. Ishigaki, H. Satoh, T. Hayashi, F. Dong, and K. Hirota, “Fuzzy few-Nearest Neighbor Method with a Few Samples for Personal Authentication,” J. Adv. Comput. Intell. Intell. Inform., Vol.14 No.2, pp. 167-178, 2010.
