
JACIII Vol.15 No.7 pp. 869-877
doi: 10.20965/jaciii.2011.p0869
(2011)

Paper:

Human Pointing Navigation Interface for Mobile Robot with Spherical Vision System

Yasutake Takahashi, Kyohei Yoshida, Fuminori Hibino,
and Yoichiro Maeda

Dept. of Human and Artificial Intelligent Systems, Graduate School of Engineering, University of Fukui, 3-9-1 Bunkyo, Fukui, Fukui 910-8507, Japan

Received:
March 5, 2011
Accepted:
May 9, 2011
Published:
September 20, 2011
Keywords:
user interface, mobile robot navigation, spherical vision system, pointing gesture
Abstract
Human-robot interaction requires an intuitive interface, which is not achievable with devices such as the joystick or teaching pendant, as these also require some training. Instruction by gesture is one example of an intuitive interface requiring no training, and pointing is one of the simplest gestures. We propose simple pointing recognition for a mobile robot with an upward-directed camera system. Using this, the robot recognizes pointing and navigates to where the user points through simple visual feedback control. This paper explores the feasibility and utility of our proposal, as shown by the results of a questionnaire comparing the proposed interface with conventional ones.
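
As a concrete illustration of the control scheme the abstract describes, below is a minimal Python sketch of proportional visual-feedback navigation: given the floor position estimated from the user's pointing gesture, the robot turns toward it and drives forward until it arrives. The gains, tolerances, and simulated unicycle kinematics are illustrative assumptions for this sketch, not the paper's implementation.

import math

# Minimal sketch of pointing-based navigation by visual feedback (assumed
# unicycle kinematics; all gains and names are illustrative, not the paper's).

K_ANGULAR = 1.5   # proportional gain on bearing error (assumed)
V_FORWARD = 0.2   # nominal forward speed, m/s (assumed)
GOAL_TOL = 0.05   # stop within 5 cm of the pointed target (assumed)
DT = 0.05         # control period, s (assumed)

def wrap(angle):
    """Wrap an angle to [-pi, pi]."""
    return math.atan2(math.sin(angle), math.cos(angle))

def navigate(pose, target):
    """Drive a simulated unicycle robot from pose (x, y, heading) to target (x, y)."""
    x, y, th = pose
    while math.hypot(target[0] - x, target[1] - y) > GOAL_TOL:
        # Bearing error: angle between the robot's heading and the target direction.
        err = wrap(math.atan2(target[1] - y, target[0] - x) - th)
        omega = K_ANGULAR * err                   # steer toward the target
        v = V_FORWARD * max(0.0, math.cos(err))   # slow down while misaligned
        x += v * math.cos(th) * DT
        y += v * math.sin(th) * DT
        th = wrap(th + omega * DT)
    return (x, y, th)

# Example: the pointing recognizer (not shown) has estimated that the user
# points at floor position (1.0, 0.5) m relative to the robot's start pose.
print(navigate((0.0, 0.0, 0.0), (1.0, 0.5)))

Scaling the forward speed by the cosine of the bearing error lets the robot turn in place when the target is far off-axis, which keeps this simple proportional law from orbiting the goal.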
Cite this article as:
Y. Takahashi, K. Yoshida, F. Hibino, and Y. Maeda, “Human Pointing Navigation Interface for Mobile Robot with Spherical Vision System,” J. Adv. Comput. Intell. Intell. Inform., Vol.15 No.7, pp. 869-877, 2011.
