Paper:
User-Adaptable Hand Pose Estimation Technique for Human-Robot Interaction
Albert Causo*, Etsuko Ueda**, Kentaro Takemura*,
Yoshio Matsumoto***, Jun Takamatsu*, and Tsukasa Ogasawara*
*Nara Institute of Science and Technology (NAIST), 8916-5 Takayama-cho, Ikoma City, Nara 630-0192, Japan
**Nara Sangyo University, 3-12-1 Tatsunokita, Sango-cho, Ikoma-gun, Nara 636-8503, Japan
***Intelligent Systems Institute, National Institute of Advanced Industrial Science and Technology, Tsukuba Central 2, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568, Japan
- [1] J. Ueda, Y. Ishida, M. Kondo, and T. Ogasawara, “Development of the NAIST-Hand with Vision-based Tactile Fingertip Sensor,” Proc. of the IEEE Int. Conf. on Robotics and Automation, pp. 2332-2337, 2005.
- [2] A. Causo, E. Ueda, Y. Kurita, Y. Matsumoto, and T. Ogasawara, “Model-based Hand Pose Estimation Using Multiple Viewpoint Silhouette Images and Unscented Kalman Filter,” Proc. of the 17th Int. Symp. on Robot and Human Interactive Communication (RO-MAN 2008), pp. 291-296, 2008.
- [3] S. Waldherr, R. Romero, and S. Thrun, “A Gesture Based Interface for Human-Robot Interaction,” Autonomous Robots, 9-2, pp. 151-173, 2000.
- [4] J. Kofman, W. Xianghai, T.J. Luu, and S. Verma, “Teleoperation of a Robot Manipulator using a Vision-based Human-Robot Interface,” IEEE Trans. on Ind. Electron., 52-5, pp. 1206-1219, 2005.
- [5] L. Antón-Canalís, E. Sánchez-Nielsen, and M. Castrillón-Santana, “Fast and Accurate Hand Pose Detection for Human-Robot Interaction,” Lecture Notes in Computer Science LNCS 3522.
- [6] M. Hasanuzzaman, T. Zhang, V. Ampornaramveth, H. Gotoda, Y. Shirai, and H. Ueno, “Adaptive Visual Gesture Recognition for Human-Robot Interaction using a Knowledge-based Software Platform,” Robotics and Autonomous Systems, 55-8, pp. 643-657, 2007.
- [7] J.M. Rehg and T. Kanade, “Digiteyes: Vision-based Hand Tracking for Human-Computer Interaction,” Proc. of the Workshop on Motion of Non-Rigid and Articulated Bodies, pp. 16-22, 1994.
- [8] C. von Hardenberg and F. Bérard, “Bare-Hand Human-Computer Interaction,” Proc. of the ACM Workshop on Perceptive User Interfaces, pp. 1-8, 2001.
- [9] E. Ueda, Y. Matsumoto, M. Imai, and T. Ogasawara, “Hand Pose Estimation for Vision-based Human Interface,” IEEE Trans. on Ind. Electron., 50-4, pp. 676-684, 2003.
- [10] Y. Wu, J.Y. Lin, and T.S. Huang, “Capturing Natural Hand Articulation,” Proc. of Int. Conf. on Computer Vision, pp. 426-432, 2001.
- [11] B. Stenger, P.R.S. Mendonça, and R. Cipolla, “Model-Based 3D Tracking of an Articulated Hand,” Proc. of Conf. on Computer Vision and Pattern Recognition, 2, pp. 310-315, 2001.
- [12] M. Bray, E. Koller-Meier, and L.V. Gool, “Smart Particle Filtering for 3D Hand Tracking,” Proc. of the Sixth IEEE Int. Conf. on Automatic Face and Gesture Recognition, p. 675, 2004.
- [13] A. Erol, G. Bebis, M. Nicolescu, R.D. Boyle, and X. Twombly, “A Review on Vision-based Full DOF Hand Motion Estimation,” Proc. of Conf. on Computer Vision and Pattern Recognition, 3, p. 75, 2005.
- [14] R. Szeliski, “Rapid Octree Construction from Image Sequences,” CVGIP: Image Understanding, 58-1, pp. 23-32, 1993.
- [15] T. Kurihara and M. Miyata, “Modeling Deformable Human Hands From Medical Images,” Proc. of the 2004 ACM SIGGRAPH/Eurographics Symp. on Computer Animation, pp. 355-363, 2004.
- [16] T. Rhee, J.P. Lewis, U. Neumann, and K. Nayak, “Soft-Tissue Deformation for In Vivo Volume Animation,” Proc. of 15th Pacific Conf. on Computer Graphics and Applications, pp. 435-438, 2007.
- [17] Y. Hattori, A. Nakazawa, and H. Takemura, “Refinement of the Shape Reconstructed by Visual Cone Intersection using Fitting the Standard Human Model,” IPSJ SIG Notes CVIM, 31, pp. 147-154, 2007.
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Copyright © 2009 by Fuji Technology Press Ltd. and Japan Society of Mechanical Engineers. All rights reserved.