
JRM Vol.21 No.1 pp. 28-35 (2009)
doi: 10.20965/jrm.2009.p0028

Paper:

Human Recognition Using RFID Technology and Stereo Vision

Songmin Jia, Jinbuo Sheng, Daisuke Chugo,
and Kunikatsu Takase

University of Electro-Communications, 1-5-1 Chofugaoka, Chofu-City, Tokyo 182-8585, Japan

Received: November 1, 2007
Accepted: June 25, 2008
Published: February 20, 2009
Keywords: RFID, stereo vision, probability, mobile robot, human detection
Abstract

In this paper, a method of human recognition for a mobile robot in an indoor environment using RFID (Radio Frequency Identification) technology and stereo vision is proposed, as it is inexpensive, flexible, and easy to use in practical environments. Because information about a human can be written in ID tags, the proposed method can detect the human more easily and quickly than other methods. The proposed method first calculates the probability that a human with an ID tag exists using Bayes' rule, and then determines the ROI for stereo camera processing in order to obtain the accurate position and orientation of the human. Hu moment invariants were introduced to recognize the human because they are insensitive to variations in position, size, and orientation. The proposed method does not need to process the entire image and easily obtains information about an obstacle, such as its size and color, thus reducing the processing computation. This paper introduces the architecture of the proposed method and presents some experimental results.
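The abstract relies on Hu moment invariants [9] for shape-based human recognition. As an illustrative sketch only (not the authors' implementation), the seven Hu invariants can be computed from the normalized central moments of a binary silhouette image; the function names and the test image below are hypothetical:

```python
import numpy as np

def central_moments(img):
    """Central moments mu_pq (p+q <= 3) of a 2-D intensity/binary image."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (xs * img).sum() / m00
    ybar = (ys * img).sum() / m00
    mu = {}
    for p in range(4):
        for q in range(4):
            if p + q <= 3:
                mu[(p, q)] = (((xs - xbar) ** p) * ((ys - ybar) ** q) * img).sum()
    return mu

def hu_moments(img):
    """Seven Hu invariants, insensitive to translation, scale, and rotation."""
    mu = central_moments(img)
    # Normalized central moments: eta_pq = mu_pq / mu_00^(1 + (p+q)/2)
    def eta(p, q):
        return mu[(p, q)] / mu[(0, 0)] ** (1 + (p + q) / 2)
    n20, n02, n11 = eta(2, 0), eta(0, 2), eta(1, 1)
    n30, n03, n21, n12 = eta(3, 0), eta(0, 3), eta(2, 1), eta(1, 2)
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])
```

Because the invariants are built from central moments, translating the silhouette within the frame leaves them unchanged, which is why they suit a detector whose ROI position varies with the RFID probability estimate.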

Cite this article as:
Songmin Jia, Jinbuo Sheng, Daisuke Chugo, and Kunikatsu Takase, “Human Recognition Using RFID Technology and Stereo Vision,” J. Robot. Mechatron., Vol.21, No.1, pp. 28-35, 2009.
References
[1] S. Ikeda and J. Miura, “3D Indoor Environment Modeling by a Mobile Robot with Omnidirectional Stereo and Laser Range Finder,” Proc. 2006 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 3435-3440, Beijing, China, Oct. 2006.
[2] H. Koyasu, J. Miura, and Y. Shirai, “Realtime Omnidirectional Stereo Obstacle Detection in Dynamic Environment,” Proc. 2001 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 31-36, 2001.
[3] D. Castro, U. Nunes, and A. Ruano, “Obstacle Avoidance in Local Navigation,” Proc. of the 10th Mediterranean Conf. on Control and Automation (MED2002), Lisbon, Portugal, July 9-12, 2002.
[4] M. Watanabe, N. Takeda, and K. Onoguchi, “Moving Obstacle Recognition by Optical Flow Pattern Analysis for Mobile Robots,” Advanced Robotics, Vol.12, No.8, pp. 791-816, 1999.
[5] C. Papageorgiou and T. Poggio, “A Trainable System for Object Detection,” Int. Journal of Computer Vision (IJCV), Vol.38, No.1, pp. 15-33, 2000.
[6] P. Felzenszwalb and D. Huttenlocher, “Pictorial Structures for Object Recognition,” Int. Journal of Computer Vision (IJCV), Vol.61, No.1, pp. 55-79, 2005.
[7] W. Lin, S. Jia, F. Yang, and K. Takase, “Topological Navigation of Mobile Robot Using ID Tag and WEB Camera,” Proc. of Int. Conf. on Intelligent Mechatronics and Automation, pp. 644-649, 2004.
[8] E. Shang, S. Jia, T. Abe, and K. Takase, “Research on Obstacle Detection with RFID Technology,” The 23rd Annual Conf. of the Robotics Society of Japan, 1B34, 2005.
[9] M. Hu, “Visual Pattern Recognition by Moment Invariants,” IRE Trans. Information Theory, Vol.8, No.2, pp. 179-187, 1962.
[10] S. Jia, Y. Hada, and K. Takase, “Distributed Telerobotics System Based on Common Object Request Broker Architecture,” The Int. Journal of Intelligent and Robotic Systems, Vol.39, pp. 89-103, 2004.
[11] S. Jia, W. Lin, K. Wang, T. Abe, E. Shang, and K. Takase, “Improvements in Developed Human-Assisting Robotic System,” Proc. of Int. Conf. on Intelligent Mechatronics and Automation, Invited paper, pp. 511-516, 2004.
