
J. Robot. Mechatron. Vol.26 No.2, pp. 185-195, 2014
doi: 10.20965/jrm.2014.p0185

Paper:

Pre-Driving Needless System for Autonomous Mobile Robots Navigation in Real World Robot Challenge 2013

Masanobu Saito, Kentaro Kiuchi, Shogo Shimizu,
Takayuki Yokota, Yusuke Fujino, Takato Saito,
and Yoji Kuroda

Department of Mechanical Engineering, Meiji University, 1-1-1 Higashimita, Tama-ku, Kawasaki 214-8571, Japan

Received: November 21, 2013
Accepted: February 24, 2014
Published: April 20, 2014
Keywords: navigation, consistent localization, traversability, motion planning, human target detection
Abstract
This paper describes a navigation system for autonomous mobile robots taking part in Tsukuba Challenge 2013, a real-world robot competition. Tsukuba Challenge 2013 allows any information about the route to be collected beforehand and used on the day of the challenge. At the same time, however, autonomous mobile robots should function appropriately in daily human life even in areas they have never visited before, so our system is designed to require no pre-driving of the course. We analyze traversability in complex urban areas without prior environmental information using light detection and ranging (LIDAR). We also estimate the robot's state, such as its position and orientation, using Gauss maps derived from LIDAR data without gyro sensors; dead reckoning combines wheel odometry with this orientation estimate. We correct the 2D robot pose by matching against electronic maps obtained from the Web. Because drift inevitably causes errors such as slippage and localization failure, the robot also traces waypoints derived beforehand from the same electronic maps, so localization remains consistent even if we do not drive through an area ahead of time. Trajectory candidates are generated along a globally planned route based on these waypoints, and an optimal trajectory is selected. Tsukuba Challenge 2013 also required robots to find specified human targets whose features were released on the Web. To find the targets correctly without driving in Tsukuba beforehand, we search for point cloud clusters resembling the specified human targets based on predefined features. These point clouds are then projected onto the current camera image, and we extract interest points such as SURF to apply fast appearance-based mapping (FAB-MAP). This enables us to find the specified targets with high accuracy. To demonstrate the feasibility of our system, experiments were conducted on a route at our university and on the Tsukuba Challenge course.
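For illustration, a minimal Python sketch of the dead-reckoning step described in the abstract is given below. The differential-drive model, all names, and the idea of substituting a LIDAR-derived heading for the integrated wheel heading are our assumptions for exposition, not the authors' implementation.

```python
import math

def dead_reckon(pose, d_left, d_right, wheel_base, lidar_heading=None):
    """Advance a 2D pose (x, y, theta) by differential-drive wheel odometry.

    If a heading estimate from the LIDAR Gauss-map step is available, it
    replaces the integrated wheel heading, which drifts over time.
    (Hypothetical sketch; not the paper's actual implementation.)
    """
    x, y, theta = pose
    d_center = 0.5 * (d_left + d_right)        # forward travel of the robot center
    d_theta = (d_right - d_left) / wheel_base  # heading change implied by the wheels
    theta_mid = theta + 0.5 * d_theta          # midpoint heading for the small arc
    x += d_center * math.cos(theta_mid)
    y += d_center * math.sin(theta_mid)
    theta = lidar_heading if lidar_heading is not None else theta + d_theta
    return (x, y, theta)
```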
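The waypoint-following motion planner can likewise be pictured with a short sketch: constant-curvature rollouts are generated from the current pose, and the candidate that stays on traversable ground while ending closest to the next waypoint is selected. The rollout model, the endpoint-distance cost, and the `is_traversable` callback are all illustrative assumptions, a crude stand-in for state-space sampling approaches such as [8, 9].

```python
import math

def generate_arcs(pose, speed, dt=0.1, horizon=2.0,
                  curvatures=(-0.5, -0.25, 0.0, 0.25, 0.5)):
    """Roll out constant-curvature trajectory candidates from the current pose."""
    candidates = []
    for k in curvatures:
        x, y, theta = pose
        traj = []
        for _ in range(int(horizon / dt)):
            x += speed * math.cos(theta) * dt
            y += speed * math.sin(theta) * dt
            theta += speed * k * dt          # curvature k = d(theta)/ds
            traj.append((x, y))
        candidates.append((k, traj))
    return candidates

def select_trajectory(candidates, waypoint, is_traversable):
    """Pick the candidate ending closest to the next waypoint, keeping only
    candidates whose every sample lies on traversable ground.
    Returns None when no candidate is traversable."""
    best, best_cost = None, float('inf')
    wx, wy = waypoint
    for k, traj in candidates:
        if not all(is_traversable(p) for p in traj):
            continue
        ex, ey = traj[-1]
        cost = math.hypot(wx - ex, wy - ey)
        if cost < best_cost:
            best, best_cost = (k, traj), cost
    return best
```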
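Finally, the target-detection pipeline projects LIDAR point cloud clusters into the camera image before SURF extraction and FAB-MAP matching. The pinhole projection below is a minimal sketch of that geometric step, assuming a known LIDAR-to-camera rotation R, translation t, and intrinsic matrix K; the variable names are ours.

```python
import numpy as np

def project_cluster(points_lidar, R, t, K):
    """Project an Nx3 LIDAR cluster into the image with a pinhole camera model.

    R (3x3) and t (3,) take LIDAR coordinates into the camera frame; K is the
    3x3 intrinsic matrix. Returns Mx2 pixel coordinates for the M points that
    lie in front of the camera. (Illustrative sketch, not the paper's code.)
    """
    pts_cam = points_lidar @ R.T + t        # LIDAR frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]    # drop points behind the image plane
    uv = pts_cam @ K.T                      # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]           # perspective division
```

The image region covered by the projected cluster would then be cropped, SURF interest points extracted from it, and the crop scored against the released target appearance with FAB-MAP [11].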
Cite this article as:
M. Saito, K. Kiuchi, S. Shimizu, T. Yokota, Y. Fujino, T. Saito, and Y. Kuroda, “Pre-Driving Needless System for Autonomous Mobile Robots Navigation in Real World Robot Challenge 2013,” J. Robot. Mechatron., Vol.26 No.2, pp. 185-195, 2014.
References
[1] S. Thrun, M. Montemerlo et al., “Stanley: The robot that won the DARPA Grand Challenge,” J. of Field Robotics, Vol.23, No.9, pp. 661-692, 2006.
[2] C. Urmson, J. Anhalt et al., “Autonomous Driving in Urban Environments: Boss and the Urban Challenge,” J. of Field Robotics, Vol.25, No.8, pp. 425-466, 2008.
[3] R. Triebel, P. Pfaff, and W. Burgard, “Multi-level surface maps for outdoor terrain mapping and loop closing,” Proc. of the IEEE/RSJ IROS, 2006.
[4] S. Shimizu and Y. Kuroda, “High-Speed Registration of Point Clouds by using Dominant Planes,” Proc. of the 19th RSJ/JSME/SICE Robotics Symposia, 2014.
[5] F. Neuhaus, D. Dillenberger, J. Pellenz, and D. Paulus, “Terrain drivability analysis in 3D laser range data for autonomous robot navigation in unstructured environments,” Proc. of the IEEE ETFA, 2009.
[6] S. Koenig and M. Likhachev, “D* Lite,” Proc. of the AAAI, pp. 476-483, 2003.
[7] D. Ferguson, T. M. Howard, and M. Likhachev, “Motion planning in urban environments,” J. of Field Robotics, Vol.25, pp. 939-960, 2008.
[8] T. M. Howard and A. Kelly, “Optimal rough terrain trajectory generation for wheeled mobile robots,” Int. J. of Robotics Research, Vol.26, No.2, pp. 141-166, 2007.
[9] T. Howard and C. Green, “State space sampling of feasible motions for high-performance mobile robot navigation in complex environments,” J. of Field Robotics, Vol.25, No.1, pp. 325-345, 2008.
[10] C.-C. Chang and C.-J. Lin, “LIBSVM: A library for support vector machines,” ACM Trans. on Intelligent Systems and Technology, Vol.2, Issue 3, 2011.
[11] M. Cummins and P. Newman, “FAB-MAP: Probabilistic localization and mapping in the space of appearance,” Int. J. of Robotics Research, Vol.27, No.6, pp. 647-665, 2008.
[12] J. Sivic and A. Zisserman, “Video Google: A text retrieval approach to object matching in videos,” Proc. of the IEEE ICCV, Vol.2, pp. 1470-1477, 2003.
[13] H. Bay et al., “SURF: Speeded Up Robust Features,” Proc. of the ECCV, pp. 404-417, 2006.
[14] D. Arthur and S. Vassilvitskii, “k-means++: The advantages of careful seeding,” Proc. of the SODA ’07, pp. 1027-1035, 2007.
[15] C. Chow and C. Liu, “Approximating discrete probability distributions with dependence trees,” IEEE Trans. on Information Theory, Vol.14, No.3, pp. 462-467, 1968.
