
JRM Vol.21 No.6, pp. 739-748 (2009)
doi: 10.20965/jrm.2009.p0739

Paper:

User-Adaptable Hand Pose Estimation Technique for Human-Robot Interaction

Albert Causo*, Etsuko Ueda**, Kentaro Takemura*,
Yoshio Matsumoto***, Jun Takamatsu*, and Tsukasa Ogasawara*

*Nara Institute of Science and Technology (NAIST), 8916-5 Takayama-cho, Ikoma City, Nara 630-0192, Japan

**Nara Sangyo University, 3-12-1 Tatsunokita, Sango-cho, Ikoma-gun, Nara 636-8503, Japan

***Intelligent Systems Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568, Japan

Received: April 22, 2009
Accepted: October 13, 2009
Published: December 20, 2009
Keywords: human-robot interaction, hand model calibration, vision-based hand pose estimation
Abstract
Hand pose estimation using a multi-camera system allows natural, non-contact interfacing, unlike bulky data gloves. To enable any user to use the system regardless of gender or physical differences such as hand size, we propose hand model individualization using only multiple cameras. From a calibration motion, our method estimates the finger link lengths as well as the hand shape by minimizing the gap between the hand model and the observation. We confirmed the feasibility of our proposal by comparing 1) actual and estimated link lengths, and 2) hand pose estimation results obtained using our calibrated hand model, a prior hand model, and data glove measurements.
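
The calibration idea in the abstract, recovering finger link lengths by minimizing the gap between the hand model and the observation over a calibration motion, can be illustrated with a minimal least-squares sketch. The Python snippet below is not the paper's implementation (the paper fits a full 3-D hand model to multi-camera observations); it assumes a simplified planar finger chain and direct joint-position observations, with forward_kinematics() and gap() as hypothetical helpers.

    # Minimal sketch of link-length calibration, assuming a simplified
    # planar finger chain and direct joint-position observations; the
    # paper's actual observations come from multi-camera silhouettes.
    import numpy as np
    from scipy.optimize import minimize

    def forward_kinematics(link_lengths, joint_angles):
        # Planar serial chain: each joint rotates relative to the previous link.
        pts, angle, pos = [], 0.0, np.zeros(2)
        for length, q in zip(link_lengths, joint_angles):
            angle += q
            pos = pos + length * np.array([np.cos(angle), np.sin(angle)])
            pts.append(pos)
        return np.array(pts)

    def gap(link_lengths, poses, observations):
        # Sum of squared model-vs-observation distances over the whole
        # calibration motion (all frames, all joints).
        return sum(np.sum((forward_kinematics(link_lengths, q) - obs) ** 2)
                   for q, obs in zip(poses, observations))

    # Synthetic calibration motion: noisy observations of a known chain.
    rng = np.random.default_rng(0)
    true_links = np.array([0.45, 0.25, 0.18])           # proximal to distal
    poses = [rng.uniform(0.1, 0.9, size=3) for _ in range(20)]
    observations = [forward_kinematics(true_links, q)
                    + rng.normal(0.0, 1e-3, size=(3, 2)) for q in poses]

    # Recover the link lengths from the calibration motion alone.
    result = minimize(gap, x0=np.full(3, 0.3), args=(poses, observations),
                      bounds=[(0.05, 1.0)] * 3)
    print("estimated link lengths:", result.x)          # close to true_links

In the paper's setting, the gap term would instead measure the distance between the hand-model surface and the shape reconstructed from the multiple camera views, but the structure of the fit, one shared set of link parameters estimated across all frames of the motion, is the same.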
Cite this article as:
A. Causo, E. Ueda, K. Takemura, Y. Matsumoto, J. Takamatsu, and T. Ogasawara, “User-Adaptable Hand Pose Estimation Technique for Human-Robot Interaction,” J. Robot. Mechatron., Vol.21 No.6, pp. 739-748, 2009.
