Paper:
Human Pointing Navigation Interface for Mobile Robot with Spherical Vision System
Yasutake Takahashi, Kyohei Yoshida, Fuminori Hibino,
and Yoichiro Maeda
Dept. of Human and Artificial Intelligent Systems, Graduate School of Engineering, University of Fukui, 3-9-1 Bunkyo, Fukui, Fukui 910-8507, Japan
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.