
JACIII Vol.21 No.4 pp. 709-715 (2017)
doi: 10.20965/jaciii.2017.p0709

Paper:

Movement Operation Interaction System for Mobility Robot Using Finger-Pointing Recognition

Eichi Tamura, Yoshihiro Yamashita, Taisei Yamashita, Eri Sato-Shimokawara, and Toru Yamaguchi

Tokyo Metropolitan University
6-6 Asahigaoka, Hino, Tokyo 191-0065, Japan

Accepted:
May 18, 2017
Published:
July 20, 2017
Keywords:
gesture recognition, finger-pointing gesture, mobility robot
Abstract

Finger pointing is an intuitive way for people to direct a robot to move to a given location. We propose a system that enables the movement of a mobility robot to be operated through finger-pointing gestures, providing an automatic and intuitive driving experience. Gestures are recognized from video images captured by a USB camera mounted on a wearable device, so no infrared sensors are required. Three movement commands, for moving forward, turning, and stopping, are selected based on gesture recognition, face-orientation detection, and an intelligent safety system. We experimentally demonstrate the usefulness of the system using a scooter-type mobility robot.
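The page itself contains no code, but as a rough illustration of the command-selection logic summarized in the abstract, the minimal Python sketch below shows one way a recognized gesture label, an estimated face-orientation angle, and a safety flag could be combined into the three movement commands. All names here (select_command, Command, the "point" label, the 20-degree yaw threshold) are hypothetical assumptions for illustration and are not taken from the paper.

# Hypothetical sketch: mapping recognition results to the three movement
# commands (forward, turn, stop) described in the abstract. Function and
# class names are illustrative, not from the paper.

from enum import Enum


class Command(Enum):
    FORWARD = "forward"
    TURN = "turn"
    STOP = "stop"


def select_command(gesture, face_yaw_deg, safety_ok, yaw_threshold_deg=20.0):
    """Choose a movement command from a recognized finger-pointing gesture,
    the rider's face orientation, and an intelligent-safety flag.

    gesture      -- label from the gesture recognizer, e.g. "point" or "none"
    face_yaw_deg -- estimated face yaw angle (0 = facing straight ahead)
    safety_ok    -- False if the safety system detects a hazard
    """
    if not safety_ok or gesture == "none":
        return Command.STOP      # safety check overrides any gesture
    if abs(face_yaw_deg) > yaw_threshold_deg:
        return Command.TURN      # pointing while looking to the side
    return Command.FORWARD       # pointing while facing forward


# Example: pointing gesture, face turned 30 degrees, no hazard detected
print(select_command("point", 30.0, True))   # -> Command.TURN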

Cite this article as:
E. Tamura, Y. Yamashita, T. Yamashita, E. Sato-Shimokawara, and T. Yamaguchi, “Movement Operation Interaction System for Mobility Robot Using Finger-Pointing Recognition,” J. Adv. Comput. Intell. Intell. Inform., Vol.21 No.4, pp. 709-715, 2017.
