
J. Robot. Mechatron., Vol.24 No.3, pp. 507-516, 2012
doi: 10.20965/jrm.2012.p0507

Paper:

Image Information Added Map Making Interface for Compensating Image Resolution

Shinya Kawakami*, Tomohito Takubo**, Kenichi Ohara*,
Yasushi Mae*, and Tatsuo Arai*

*Osaka University, 1-3 Machikaneyama-cho, Toyonaka, Osaka 560-8531, Japan

**Osaka City University, 3-3-138 Sugimoto, Sumiyoshi-ku, Osaka 558-8585, Japan

Received: October 3, 2011
Accepted: April 18, 2012
Published: June 20, 2012
Keywords:
mapping, SLAM, human interface, image information added map
Abstract
We propose an image information added map as an intuitive interface for exploring unknown environments through pictures. The proposed map associates a good picture with each mapped object. The shooting angle and position for each picture are determined by the required image resolution, the camera specifications, and the object's shape. By referring to an object's shooting vector, its appearance from a desired direction can be confirmed intuitively. To build the proposed map, high-quality image information must be acquired according to this definition. We developed a tool for making the map and verified its effectiveness in an experiment.
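As a rough illustration of how a required image resolution and the camera specifications can constrain the shooting position, the sketch below uses a simple pinhole-camera model: the pixel density achieved on an object's surface falls off with distance and with the viewing angle relative to the surface normal. This is a minimal assumption-laden sketch, not the paper's actual formulation; the function and parameter names are hypothetical.

```python
import math

def max_shooting_distance(focal_px: float,
                          required_res_px_per_m: float,
                          angle_deg: float) -> float:
    """Farthest camera distance that still meets a required surface resolution.

    Pinhole-camera sketch (hypothetical simplification):
      focal_px             -- focal length expressed in pixels
      required_res_px_per_m -- required pixel density on the object's surface
      angle_deg            -- viewing angle from the surface normal

    At distance d, a fronto-parallel surface is imaged at focal_px / d
    pixels per meter; tilting the view by angle_deg foreshortens the
    surface, degrading that density by cos(angle_deg).
    """
    effective = focal_px * math.cos(math.radians(angle_deg))
    return effective / required_res_px_per_m

# Example: with a 1000 px focal length and a requirement of 500 px/m,
# a head-on shot works out to 2 m; at 60 degrees off-normal, only 1 m.
d_frontal = max_shooting_distance(1000.0, 500.0, 0.0)
d_oblique = max_shooting_distance(1000.0, 500.0, 60.0)
```

Under this model, a candidate shooting pose would be accepted only if its distance to the object stays below the bound for its viewing angle, which is one plausible way to read "defined by the required resolution of the image, the camera specifications and the object's shape."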
Cite this article as:
S. Kawakami, T. Takubo, K. Ohara, Y. Mae, and T. Arai, “Image Information Added Map Making Interface for Compensating Image Resolution,” J. Robot. Mechatron., Vol.24 No.3, pp. 507-516, 2012.
