JRM Vol.26 No.2 pp. 151-157
doi: 10.20965/jrm.2014.p0151


Person Detection Method Based on Color Layout in Real World Robot Challenge 2013

Kenji Yamauchi, Naoki Akai, Ryutaro Unai,
Kazumichi Inoue, and Koichi Ozaki

Utsunomiya University, 7-1-2 Yoto, Utsunomiya-City, Tochigi 321-8585, Japan

Received: December 5, 2013
Accepted: January 29, 2014
Published: April 20, 2014
Keywords: Real World Robot Challenge, autonomous mobile robot, image processing, person detection, color layout
In Real World Robot Challenge 2013, a new mission required robots to search for persons wearing clothes in unique colors. We focus on the layout of these clothing colors, aiming to detect the target persons by color extraction. The extraction is made robust by preprocessing a clipped image from the robot's vision, so that the colors worn by the target persons can be extracted stably in natural light. Persons are then detected by simply evaluating the layout of the target colors. Our robots were equipped with this person detection method for the challenge and detected all targeted persons. This paper discusses the person detection performance based on pre- and post-challenge results.
Cite this article as:
K. Yamauchi, N. Akai, R. Unai, K. Inoue, and K. Ozaki, “Person Detection Method Based on Color Layout in Real World Robot Challenge 2013,” J. Robot. Mechatron., Vol.26 No.2, pp. 151-157, 2014.
References:
  [1] T. Mita et al., “Discriminative Feature Co-occurrence Selection for Object Detection,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.30, No.7, pp. 1257-1269, 2008.
  [2] S. Lazebnik et al., “Beyond Bags of Features: Spatial Pyramid Matching for Recognizing Natural Scene Categories,” IEEE Conf. on Computer Vision and Pattern Recognition, pp. 2169-2178, 2006.
  [3] K. Ozaki et al., “Development of Strawberry Picking Robot – Picking for High Quality Strawberries without a Touch of their Skins –,” JSME Conf. on Robotics and Mechatronics, 1A1-A21, 2010 (in Japanese).
  [4] K. Klasing et al., “A Clustering Method for Efficient Segmentation of 3D Laser Data,” IEEE Int. Conf. on Robotics and Automation, pp. 4043-4048, 2008.
  [5] A. Carballo et al., “Laser Reflection Intensity and Multi-Layered Laser Range Finders for People Detection,” IEEE Int. Symp. on Robot and Human Interactive Communication, pp. 379-384, 2010.
  [6] J. Wright et al., “Robust Face Recognition via Sparse Representation,” IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol.31, No.2, pp. 210-227, 2009.
  [7] N. Dalal et al., “Histograms of Oriented Gradients for Human Detection,” IEEE Computer Society Conf. on Computer Vision and Pattern Recognition, Vol.1, pp. 886-893, 2005.
  [8] J. Han et al., “Enhanced Computer Vision with Microsoft Kinect Sensor: A Review,” IEEE Trans. on Cybernetics, Vol.43, No.5, pp. 1318-1334, 2013.
  [9] M. Bertozzi et al., “Pedestrian Detection in Infrared Images,” Proc. of the IEEE Intelligent Vehicles Symposium, pp. 662-667, 2003.
  [10] C. Garcia et al., “Face Detection Using Quantized Skin Color Regions Merging and Wavelet Packet Analysis,” IEEE Trans. on Multimedia, Vol.1, No.3, pp. 264-277, 1999.
  [11] N. Akai et al., “Autonomous Navigation Based on Magnetic and Geometric Landmarks on Environmental Structure in Real World,” J. of Robotics and Mechatronics, Vol.26, No.2, 2014 (in press).
  [12]
Supporting Online Materials:
  [a] CIE 1931 Chromaticity Diagram Map. dhouse/courses/404/demos/cie.html [Accessed January 20, 2014]
