JRM Vol.21 No.6 pp. 689-697
doi: 10.20965/jrm.2009.p0689


Object Detection and Recognition Using Template Matching with SIFT Features Assisted by Invisible Floor Marks

Seiji Aoyagi*, Nobuhiko Hattori*, Atsushi Kohama*, Sho Komai*,
Masato Suzuki*, Masaharu Takano*, and Eiji Fukui**

*Faculty of Engineering, Kansai University
3-3-35 Yamate-cho, Suita, Osaka 564-8680, Japan

**OG Corporation 2-8-7 Nihonbashi-honcho, Chuo-ku, Tokyo 103-8417, Japan

Received: June 6, 2009; Accepted: October 26, 2009; Published: December 20, 2009
Keywords: SLAM, template matching using SIFT, invisible floor mark, partial template, spatial relationship to floor

For simultaneous localization and mapping (SLAM) by an indoor mobile robot, a method for processing a monocular image of the entire environmental view is proposed. To ensure that objects can be searched for reliably, invisible floor marks are proposed as a modification of the environment; they are useful for narrowing the search area in an image. Specifically, our approach involves: 1) narrowing the search area using invisible floor marks, 2) extracting features based on the scale-invariant feature transform (SIFT), 3) template matching with SIFT features assisted by partial templates and the spatial relationship to the floor, and 4) verifying object recognition with an AdaBoost classifier using Haar-like features based on object shape information. The robot is localized relative to the floor using the floor marks; objects in a cluttered image are then extracted and recognized, and their 3D solid models are mapped onto the floor to build a highly structured 3D map. Recognition success exceeded 80% for objects including tables and chairs, taking several tens of seconds per 640 × 480 pixel image.
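Step 3) of the approach compares SIFT descriptors extracted from a template against those extracted from the scene; the standard acceptance criterion for such descriptor correspondences is Lowe's distance-ratio test [9]. The following is a minimal pure-Python sketch of that test, not the authors' implementation: it uses toy 2-D descriptors in place of real 128-dimensional SIFT descriptors, and the function names are illustrative assumptions.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ratio_test_matches(template_desc, scene_desc, ratio=0.8):
    """Match each template descriptor to scene descriptors using
    Lowe's ratio test: accept a match only if the nearest scene
    descriptor is clearly closer than the second nearest, which
    rejects ambiguous correspondences in cluttered scenes.
    Returns a list of (template_index, scene_index) pairs."""
    matches = []
    for i, d in enumerate(template_desc):
        # Distance to every scene descriptor, nearest first.
        dists = sorted((euclidean(d, s), j) for j, s in enumerate(scene_desc))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy example: each template descriptor has one close, unambiguous
# counterpart in the scene, so both correspondences are accepted.
template = [[0.0, 0.0], [5.0, 5.0]]
scene = [[0.1, 0.0], [3.0, 3.0], [5.0, 5.1]]
print(ratio_test_matches(template, scene))
```

In practice the ratio threshold (0.8 here, following [9]) trades recall against false matches; the accepted pairs would then feed the partial-template and floor-relationship checks described above.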

Cite this article as:
Seiji Aoyagi, Nobuhiko Hattori, Atsushi Kohama, Sho Komai,
Masato Suzuki, Masaharu Takano, and Eiji Fukui, “Object Detection and Recognition Using Template Matching with SIFT Features Assisted by Invisible Floor Marks,” J. Robot. Mechatron., Vol.21, No.6, pp. 689-697, 2009.
References:
  [1] S. Thrun, W. Burgard, and D. Fox, “Probabilistic Robotics,” MIT Press, Cambridge, MA, 2005. (Japanese version by R. Ueda, Mainichi-Communications, 2007).
  [2] A. Davison, “Real-Time Simultaneous Localization and Mapping with a Single Camera,” in Proc. ICCV, pp. 1403-1410, Nice, France, 2003.
  [3] M. Tomono, “3-D Object Map Building Using Dense Object Models with SIFT-based Recognition Features,” in Proc. IROS 2006, pp. 1885-1890, Beijing, China, 2006.
  [4] R. Sim and J. J. Little, “Autonomous Vision-Based Exploration and Mapping Using Hybrid Maps and Rao-Blackwellised Particle Filters,” in Proc. IROS 2006, pp. 2082-2089, Beijing, China, 2006.
  [5] R. Kurazume, H. Yamada, K. Murakami, Y. Iwashita, and T. Hasegawa, “Target Tracking Using SIR and MCMC Particle Filters by Multiple Cameras and Laser Range Finders,” in Proc. IROS 2008, pp. 3838-3844, Nice, France, 2008.
  [6] S. Ikeda and J. Miura, “3D Indoor Environment Modeling by a Mobile Robot with Omnidirectional Stereo and Laser Range Finder,” in Proc. IROS 2006, pp. 3435-3440, Beijing, China, 2006.
  [7] “Special Issue on Visual SLAM,” J. Neira, A. J. Davison, and J. J. Leonard (Eds.), IEEE Trans. Robotics, Vol.24, No.5, pp. 929-1093, 2008.
  [8] D. G. Lowe, “Object Recognition from Local Scale-Invariant Features,” in Proc. ICCV, pp. 1150-1157, Kerkyra, Greece, 1999.
  [9] D. G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” Int. J. Computer Vision, Vol.60, No.2, pp. 91-110, 2004.
  [10] D. Meger, P. E. Forssen, K. Lai, S. Helmer, S. McCann, T. Southey, M. Baumann, J. J. Little, and D. G. Lowe, “Curious George: An Attentive Semantic Robot,” Robotics and Autonomous Systems, Vol.56, No.6, pp. 503-511, 2008.
  [11] H. Zender, O. Martínez Mozos, P. Jensfelt, G.-J. M. Kruijff, and W. Burgard, “Conceptual Spatial Representations for Indoor Mobile Robots,” Robotics and Autonomous Systems, Vol.56, No.6, pp. 493-502, 2008.
  [12] A. C. Murillo, J. Košecká, J. J. Guerrero, and C. Sagüés, “Visual Door Detection Integrating Appearance and Shape Cues,” Robotics and Autonomous Systems, Vol.56, No.6, pp. 512-521, 2008.
  [13] “From Features to Actions – Unifying Perspectives in Computational and Robot Vision,” Workshop at ICRA 2007, Rome, Italy.
  [14] “From Sensors to Human Spatial Concepts,” Workshop at IROS 2007, San Diego, USA.
  [15] S. Aoyagi, T. Yamaguchi, K. Tsunemine, H. Kinomoto, and M. Takano, “Development of a Mobile Home Robot System based on RECS Concept and Its Application to Setting and Clearing the Table,” J. Robotics and Mechatronics, Vol.19, No.6, pp. 646-655, 2007.
  [16] T. Joutou, H. Hoashi, and K. Yanai, “50-kind Food Image Recognition employing Multiple Kernel Learning,” in Proc. Meeting on Image Recognition and Understanding (MIRU), pp. 111-118, Matsue, Japan, 2009.
  [17] R. Suematsu and H. Yamada, “Image Processing Engineering,” Corona Publishing Co., Ltd., pp. 133-136, 2000.
  [18] K. Okada, M. Kojima, Y. Sagawa, T. Ichino, K. Sato, and M. Inaba, “Vision Based Behavior Verification System of Humanoid Robot for Daily Environment Tasks,” in Proc. IEEE-RAS Int. Conf. on Humanoid Robots (Humanoids 2006), pp. 7-12, 2006.
  [19] H. Murase and S. K. Nayar, “3D Object Recognition from Appearance: Parametric Eigenspace Method,” The Trans. of the Institute of Electronics, Information and Communication Engineers, Vol.J77-D-2, No.11, pp. 2179-2187, 1994.
  [20] R. Hess, ~hess/index.html
  [21] J. S. Beis and D. G. Lowe, “Shape Indexing Using Approximate Nearest-Neighbour Search in High-Dimensional Spaces,” in Proc. Conf. Computer Vision and Pattern Recognition, pp. 1000-1006, Puerto Rico, 1997.
  [22] E. Frank, M. A. Hall, G. Holmes, R. Kirkby, and B. Pfahringer, WEKA [Software]. Available:
  [23] A. Stein and M. Hebert, “Incorporating Background Invariance into Feature-Based Object Recognition,” in Proc. Seventh IEEE Workshop on Applications of Computer Vision (WACV/MOTION’05), pp. 37-44, Washington, D.C., USA, 2005.

