JACIII Vol.16 No.6 pp. 687-695
doi: 10.20965/jaciii.2012.p0687
(2012)

Paper:

A Combined Method Based on SVM and Online Learning with HOG for Hand Shape Recognition

Kazutaka Shimada, Ryosuke Muto, and Tsutomu Endo

Department of Artificial Intelligence, Kyushu Institute of Technology, 680-4 Kawazu, Iizuka, Fukuoka 820-8502, Japan

Received:
January 16, 2012
Accepted:
June 20, 2012
Published:
September 20, 2012
Keywords:
hand shape recognition, SVMs, online learning, HOG, combination
Abstract
In this paper, we propose a combined method for hand shape recognition. It consists of Support Vector Machines (SVMs) and an online learning algorithm based on the perceptron, and applies HOG features to both. First, our method estimates the hand shape of an input image using SVMs. During recognition, the online perceptron treats an input image as new training data if the image is effective for relearning. Next, we select the final hand shape from the outputs of the SVMs and perceptrons on the basis of the SVM scores. The combined method with the online perceptron is robust against unknown users because it includes a relearning process for the current user; applying the online perceptron therefore improves accuracy. We compare the combined method with a method that uses only SVMs. Experimental results show the effectiveness of the proposed method.
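The selection and relearning steps described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the multiclass-perceptron form, the confidence threshold, and the rule "relearn from images the SVM classifies confidently" are all assumptions; the SVM itself is represented only by a precomputed label and score.

```python
# Hypothetical sketch of the combined method: an SVM (stubbed here as a
# label plus a confidence score) proposes a hand shape; a multiclass
# perceptron is relearned online from confident inputs; the final label
# is chosen between the two classifiers using the SVM score.

THRESHOLD = 0.5  # assumed confidence threshold, not taken from the paper


class OnlinePerceptron:
    """Multiclass perceptron: one weight vector per hand-shape class."""

    def __init__(self, n_classes, n_features):
        self.w = [[0.0] * n_features for _ in range(n_classes)]

    def predict(self, x):
        # Return the class whose weight vector scores highest on x
        # (x would be a HOG feature vector in the paper's setting).
        scores = [sum(wi * xi for wi, xi in zip(w, x)) for w in self.w]
        return max(range(len(scores)), key=scores.__getitem__)

    def update(self, x, label):
        # Standard perceptron rule: adjust weights only on a mistake.
        pred = self.predict(x)
        if pred != label:
            for i, xi in enumerate(x):
                self.w[label][i] += xi
                self.w[pred][i] -= xi


def combine(svm_label, svm_score, perceptron, x):
    """Select the final hand shape using the SVM score."""
    if svm_score >= THRESHOLD:
        # The SVM is confident: trust its label and use the image
        # as new training data for the online perceptron.
        perceptron.update(x, svm_label)
        return svm_label
    # Otherwise fall back on the user-adapted perceptron.
    return perceptron.predict(x)
```

Because the perceptron is updated from the current user's own images, its decision boundaries drift toward that user, which is the source of the robustness to unknown users claimed above.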
Cite this article as:
K. Shimada, R. Muto, and T. Endo, “A Combined Method Based on SVM and Online Learning with HOG for Hand Shape Recognition,” J. Adv. Comput. Intell. Intell. Inform., Vol.16 No.6, pp. 687-695, 2012.
