
JRM Vol.23 No.6 pp. 1012-1023
doi: 10.20965/jrm.2011.p1012
(2011)

Paper:

Obstacle Location Classification and Self-Localization by Using a Mobile Omnidirectional Camera Based on Tracked Floor Boundary Points and Tracked Scale-Rotation Invariant Feature Points

Tsuyoshi Tasaki, Seiji Tokura, Takafumi Sonoura,
Fumio Ozaki, and Nobuto Matsuhira

Toshiba Research & Development Center, Toshiba Corporation, 1 Komukai-Toshiba-cho, Saiwai-ku, Kawasaki 212-8582, Japan

Received: April 18, 2011
Accepted: August 2, 2011
Published: December 20, 2011
Keywords: omnidirectional camera, obstacle classification, self-localization, mobile robot
Abstract
For a mobile robot, self-localization and knowledge of the locations of all obstacles around it are essential. Moreover, classifying the obstacles as stable or unstable and localizing the robot quickly with a single sensor, such as an omnidirectional camera, are also important for achieving smooth movement and reducing the cost of the robot. However, few studies have addressed locating and classifying all obstacles around the robot while also localizing it quickly during motion using only one omnidirectional camera. To locate obstacles and localize the robot, we have developed a new method that uses two kinds of points that can be detected and tracked quickly even in omnidirectional images. In the obstacle location and classification process, we use floor boundary points, whose distances from the robot can be measured with an omnidirectional camera. By tracking these points and comparing the movement of each tracked point with odometry data, we can classify obstacles. To enhance classification, our method adjusts the threshold used to detect the points based on the result of this comparison. In the self-localization process, we use tracked scale- and rotation-invariant feature points as new landmarks; they remain detectable for a long time because a fast tracking method is combined with the slower Speeded Up Robust Features (SURF) method. Once landmarks are detected, they can be tracked quickly, which enables fast self-localization. The classification ratio of our method is 85.0%, four times higher than that of a previous method. Using our method, the robot localizes itself 2.9 times faster and 4.2 times more accurately than by using the SURF method alone.
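
To make the abstract's two ideas concrete, the sketch below (Python with NumPy and OpenCV; not the authors' implementation) illustrates (a) a hypothetical rule for classifying a tracked floor boundary point as stable or unstable by comparing its apparent motion with the motion predicted from odometry alone, and (b) the detect-slowly/track-quickly pattern behind the landmark handling: SURF keypoints detected occasionally and then followed frame to frame with pyramidal Lucas-Kanade tracking (see [13], [21], and [22] in the reference list). Function names, the 5 cm threshold, and the coordinate conventions are assumptions made purely for illustration.

import numpy as np
import cv2

def rot(theta):
    """2x2 rotation matrix for a planar heading angle."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def classify_floor_point(p_prev, p_curr, pose_prev, pose_curr, thresh=0.05):
    """Hypothetical rule (not the paper's exact one): compare the tracked floor
    boundary point's motion with the motion predicted from odometry alone.
    p_prev, p_curr : (x, y) of the point in the robot frame at the previous/current step [m]
    pose_prev, pose_curr : odometry poses (x, y, theta) of the robot in the world frame
    Returns 'stable' if the point moved as a static obstacle would, else 'unstable'."""
    x1, y1, th1 = pose_prev
    x2, y2, th2 = pose_curr
    # World position the point would occupy if it were static.
    w = rot(th1) @ np.asarray(p_prev) + np.array([x1, y1])
    # Where that static point should appear in the current robot frame.
    p_pred = rot(th2).T @ (w - np.array([x2, y2]))
    # Small deviation from the odometry prediction -> stable obstacle.
    return "stable" if np.linalg.norm(p_pred - np.asarray(p_curr)) < thresh else "unstable"

def detect_then_track(gray_prev, gray_curr):
    """Detect landmarks with SURF (slow, done only occasionally), then track them
    quickly with pyramidal Lucas-Kanade optical flow on subsequent frames.
    Requires an OpenCV build with the contrib xfeatures2d module."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    keypoints = surf.detect(gray_prev, None)
    if len(keypoints) == 0:
        empty = np.empty((0, 1, 2), np.float32)
        return empty, empty
    pts = cv2.KeyPoint_convert(keypoints).astype(np.float32).reshape(-1, 1, 2)
    # Fast frame-to-frame tracking of the previously detected landmarks.
    pts_next, status, _ = cv2.calcOpticalFlowPyrLK(gray_prev, gray_curr, pts, None)
    ok = status.ravel() == 1
    return pts[ok], pts_next[ok]

A floor boundary point on a static obstacle appears to move exactly opposite to the robot's own motion, so it matches the odometry prediction; a point on a moving person deviates from it, which is the cue the adaptive detection threshold in the abstract exploits.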
Cite this article as:
T. Tasaki, S. Tokura, T. Sonoura, F. Ozaki, and N. Matsuhira, “Obstacle Location Classification and Self-Localization by Using a Mobile Omnidirectional Camera Based on Tracked Floor Boundary Points and Tracked Scale-Rotation Invariant Feature Points,” J. Robot. Mechatron., Vol.23 No.6, pp. 1012-1023, 2011.
References
  [1] Z. Jia, A. Balasuriya, and S. Challa, “Sensor Fusion based 3D Target Visual Tracking for Autonomous Vehicles with IMM,” Int. Conf. on Robotics and Automation, pp. 1841-1846, 2005.
  [2] M. Weser, D. Westhoff, M. Hüser, and J. Zhang, “Multimodal People Tracking and Trajectory Prediction based on Learned Generalized Motion Patterns,” Int. Conf. on Multisensor Fusion and Integration for Intelligent Systems, pp. 541-546, 2006.
  [3] Z. Chen and S. T. Birchfield, “Person Following with a Mobile Robot Using Binocular Feature-Based Tracking,” Int. Conf. on Intelligent Robots and Systems, pp. 815-820, 2007.
  [4] K. Yamazawa, Y. Yagi, and M. Yachida, “HyperOmni Vision: Visual navigation with an omnidirectional image sensor,” Systems and Computers in Japan, Vol.28, No.4, pp. 36-47, 1997.
  [5] Y. Yagi, H. Nagai, K. Yamazawa, and M. Yachida, “Reactive Visual Navigation based on Omnidirectional Sensing - Path Following and Collision Avoidance -,” Int. Conf. on Intelligent Robots and Systems, pp. 58-63, 1999.
  [6] G. Silveira, E. Malis, and P. Rives, “Real-time Robust Detection of Planar Regions in a Pair of Images,” Int. Conf. on Intelligent Robots and Systems, pp. 49-54, 2006.
  [7] Y. Pang, Q. Huang, W. Zhang, Z. Hu, A. H. Rajpar, and K. Li, “Real-time Object Tracking of a Robot Head Based on Multiple Visual Cues Integration,” Int. Conf. on Intelligent Robots and Systems, pp. 686-691, 2006.
  [8] B. Jung and G. S. Sukhatme, “Detecting Moving Objects using a Single Camera on a Mobile Robot in an Outdoor Environment,” Conf. on Intelligent Autonomous Systems, pp. 980-987, 2004.
  [9] G. Chivilo, F. Mezzaro, A. Sgorbissa, and R. Zaccaria, “Follow-the-Leader Behavior through Optical Flow Minimization,” Int. Conf. on Intelligent Robots and Systems, pp. 3182-3187, 2004.
  [10] A. C. Murillo, J. J. Guerrero, and C. Sagues, “SURF features for efficient robot localization with omnidirectional images,” Int. Conf. on Robotics and Automation, pp. 3901-3907, 2007.
  [11] S. Ahn, W. K. Chung, and S. R. Oh, “Construction of Hybrid Visual Map for Indoor SLAM,” IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 1695-1701, 2007.
  [12] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. of Computer Vision, Vol.60, No.2, pp. 91-110, 2004.
  [13] H. Bay, T. Tuytelaars, and L. V. Gool, “SURF: Speeded Up Robust Features,” European Conf. on Computer Vision, 2006.
  [14] H. Takeshima, T. Ida, and T. Kaneko, “Extracting Object Regions Using Locally Estimated Probability Density Functions,” Conf. on Machine Vision Applications, 2007.
  [15] S. M. Bopalkar, P. Talwai, and B. H. Parmar, “Body Parts Detection in Gesture Recognition using Color Information,” Int. Conf. and Workshop on Emerging Trends in Technology, pp. 149-152, 2011.
  [16] R. Hassanpour, A. Shahbahrami, and S. Wong, “Adaptive Gaussian Mixture Model for Skin Color Segmentation,” World Academy of Science, pp. 1-6, 2008.
  [17] J. H. Ward, “Hierarchical Grouping to Optimize an Objective Function,” J. of the American Statistical Association, Vol.58, No.301, pp. 236-244, 1963.
  [18] Y. Negishi, J. Miura, and Y. Shirai, “Calibration of Omnidirectional Stereo for Mobile Robots,” Int. Conf. on Intelligent Robots and Systems, pp. 2600-2605, 2004.
  [19] N. Mitsunaga, T. Miyashita, H. Ishiguro, K. Kogure, and N. Hagita, “Robovie-IV: A Communication Robot Interacting with People Daily in an Office,” Int. Conf. on Intelligent Robots and Systems, pp. 5066-5072, 2006.
  [20] T. Kanda and H. Ishiguro, “Friendship estimation model for social robots to understand human relationships,” Int. Workshop on Robot and Human Communication, pp. 539-544, 2004.
  [21] B. D. Lucas and T. Kanade, “An Iterative Image Registration Technique with an Application to Stereo Vision,” Int. Joint Conf. on Artificial Intelligence, pp. 674-679, 1981.
  [22] J. Y. Bouguet, “Pyramidal Implementation of the Lucas Kanade Feature Tracker,” OpenCV Documentation, Intel Corporation, Microprocessor Research Labs, 1999.
  [23] R. Kawanishi, A. Yamashita, and T. Kaneko, “Estimation of Camera Motion with Feature Flow Model for 3D Environment Modeling by Using Omni-Directional Camera,” Int. Conf. on Intelligent Robots and Systems, pp. 3089-3094, 2009.
  [24] C. Cortes and V. Vapnik, “Support-Vector Networks,” Machine Learning, Vol.20, pp. 273-297, 1995.
  [25] N. Ando, T. Suehiro, K. Kitagaki, T. Kotoku, and W. K. Yoon, “RT-Middleware: Distributed Component Middleware for RT (Robot Technology),” Int. Conf. on Intelligent Robots and Systems, pp. 3555-3560, 2005.
  [26] M. Piaggio, R. Formaro, A. Piombo, L. Sanna, and R. Zaccaria, “An Optical-Flow Person Following Behavior,” IEEE ISIC/CIRA/ISAS Joint Conf., pp. 4078-4083, 1998.
  [27] S. Oyama, T. Kokubo, and T. Ishida, “Domain Specific Search with Keyword Spices,” IEEE Trans. on Knowledge and Data Engineering, Vol.16, No.1, pp. 17-27, 2004.
  [28] H. Nabeshima, R. Miyagawa, Y. Suzuki, and K. Iwanuma, “Rapid Synthesis of Domain-Specific Web Search Engines Based on Semi-Automatic Training-Example Generation,” Int. Conf. on Web Intelligence, pp. 769-772, 2006.
