
JRM Vol.20 No.2 pp. 213-220 (2008)
doi: 10.20965/jrm.2008.p0213

Paper:

Furniture Model Creation Through Direct Teaching to a Mobile Robot

Kimitoshi Yamazaki*, Takashi Tsubouchi**, and Masahiro Tomono***

*Grad. School of Science and Inf., The Univ. of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Japan

**Grad. School of Syst. and Inf. Eng., Univ. of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki, Japan

***Dept. of Syst. Robotics, Toyo Univ., 2100 Kujirai, Kawagoe, Saitama, Japan

Received: October 1, 2007
Accepted: December 10, 2007
Published: April 20, 2008
Keywords: instructed motion model, direct teaching, mobile manipulator, furniture model, service robot
Abstract
In this paper, a method for modeling furniture so that a robot can handle it is proposed. Real-life environments are crowded with objects such as drawers and cabinets that, while easily dealt with by people, present mobile robots with problems. While it is to be hoped that robots will assist in multiple daily tasks such as putting objects into drawers, the major problem lies in providing robots with knowledge about the environment efficiently and, if possible, autonomously.
If mobile robots can handle such furniture autonomously, they can be expected to perform multiple daily jobs, for example, storing a small object in a drawer. However, manually giving a robot the several pieces of knowledge it needs about each piece of furniture is a laborious process. In our approach, sensor data from a camera and a laser range finder, combined with direct teaching, are used to create a handling model that captures not only how to handle the furniture but also its appearance and 3D shape. Experimental results show the effectiveness of our methods.
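As a rough illustration only, the handling model described above bundles three kinds of data per piece of furniture: a motion recorded by direct teaching, appearance features from the camera (e.g., SIFT keypoints, cf. [15]), and a 3D shape measured by the laser range finder (cf. [14]). The following Python sketch shows one plausible way to organize such a record; it is not the authors' implementation, and every name in it is hypothetical.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    # Gripper pose sampled during direct teaching: (x, y, theta).
    # A hypothetical simplification; a real system would store a full 6-DOF pose.
    Pose = Tuple[float, float, float]

    @dataclass
    class FurnitureHandlingModel:
        name: str  # e.g., "kitchen drawer"
        # Gripper poses recorded while a person guides the robot's arm.
        taught_trajectory: List[Pose] = field(default_factory=list)
        # Appearance descriptors extracted from camera images (e.g., SIFT [15]).
        appearance_features: List[List[float]] = field(default_factory=list)
        # 3D points of the furniture surface from the laser range finder [14].
        shape_points: List[Tuple[float, float, float]] = field(default_factory=list)

        def record_teaching_step(self, pose: Pose) -> None:
            """Append one gripper pose sampled during direct teaching."""
            self.taught_trajectory.append(pose)

    # Usage: record a short taught motion that pulls a drawer out along x.
    model = FurnitureHandlingModel(name="drawer")
    for t in range(5):
        model.record_teaching_step((0.40 + 0.02 * t, 0.0, 0.0))
    print(len(model.taught_trajectory))  # -> 5

Storing appearance and 3D shape alongside the taught motion is what would let a robot first recognize and localize the furniture with its own sensors and then reuse the handling motion relative to it.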
Cite this article as:
K. Yamazaki, T. Tsubouchi, and M. Tomono, “Furniture Model Creation Through Direct Teaching to a Mobile Robot,” J. Robot. Mechatron., Vol.20 No.2, pp. 213-220, 2008.
References
[1] K. Nagatani and S. Yuta, "Autonomous Mobile Robot Navigation Including Door Opening Behavior - System Integration of Mobile Manipulator to Adapt Real Environment," Proc. of Int. Conf. on Field and Service Robotics, pp. 208-215, 1997.
[2] J. Miura, Y. Shirai, and N. Shimada, "Development of a Personal Service Robot with User-Friendly Interfaces," 4th Int. Conf. on Field and Service Robotics, pp. 293-298, 2003.
[3] L. Petersson, P. Jensfelt, D. Tell, M. Strandberg, D. Kragic, and H. I. Christensen, "Systems Integration for Real-World Manipulation Tasks," Proc. 2002 IEEE Int. Conf. on Robotics and Automation, pp. 2500-2505, 2002.
[4] N. Y. Chong and K. Tanie, "Object Directive Manipulation Through RFID," Proc. Int. Conf. on Control, Automation, and Systems, pp. 22-25, 2003.
[5] R. Katsuki, J. Ohta, T. Mizuta, T. Kito, T. Arai, T. Ueyama, and T. Nishiyama, "Design of Artificial Marks to Determine 3D Pose by Monocular Vision," Proc. 2003 IEEE Int. Conf. on Robotics and Automation, pp. 995-1000, 2003.
[6] E. S. Neo, K. Maruyama, T. Sakaguchi, Y. Kawai, and K. Yokoi, "A Behavior Level Operation System for Humanoid Robots," Proc. IEEE-RAS Int. Conf. on Humanoid Robots, pp. 327-332, 2006.
[7] K. Okada, M. Kojima, Y. Sagawa, T. Ichino, K. Sato, and M. Inaba, "Vision Based Behavior Verification System of Humanoid Robot for Daily Environment Tasks," 6th IEEE-RAS Int. Conf. on Humanoid Robots, pp. 7-12, 2006.
[8] J. D. Todd, "Door Opening and Handle Manipulation by Automatic Guided Vehicles," Computer-Aided Production Engineering, edited by V. C. Venkatesh and J. A. McGeough, Elsevier, pp. 373-378, 1991.
[9] T. Inamura, N. Kojo, and M. Inaba, "Situation Recognition and Behavior Induction Based on Geometric Symbol Representation of Multimodal Sensorimotor Patterns," IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 5147-5152, 2006.
[10] H. Tominaga, J. Takamatsu, K. Ogawara, H. Kimura, and K. Ikeuchi, "Symbolic Representation of Trajectories for Skill Generation," Proc. of Int. Conf. on Robotics and Automation, Vol.4, pp. 4077-4082, 2000.
[11] K. Okada, T. Ogura, A. Haneda, J. Fujimoto, F. Gravot, and M. Inaba, "Humanoid Motion Generation System on HRP2-JSK for Daily Life Environment," 2005 IEEE Int. Conf. on Mechatronics and Automation, pp. 1772-1777, 2005.
[12] S. Nakaoka, A. Nakazawa, K. Yokoi, H. Hirukawa, and K. Ikeuchi, "Generating Whole Body Motions for a Biped Humanoid Robot from Captured Human Dances," Proc. of IEEE Int. Conf. on Robotics and Automation, pp. 3905-3910, 2003.
[13] H. Asada and Y. Asari, "The Direct Teaching of Tool Manipulation Skills via the Impedance Identification of Human Motion," Proc. of IEEE Int. Conf. on Robotics and Automation, pp. 1269-1274, 1988.
[14] http://www.hokuyo-aut.jp/products/urg/urg.htm
[15] D. G. Lowe, "Distinctive Image Features from Scale-Invariant Keypoints," Int. Journal of Computer Vision, Vol.60, No.2, pp. 91-110, 2004.
[16] J. Shi and C. Tomasi, "Good Features to Track," IEEE Conf. on Computer Vision and Pattern Recognition, pp. 593-600, 1994.
[17] M. J. Swain and D. H. Ballard, "Color Indexing," Int. Journal of Computer Vision, Vol.7, pp. 11-32, 1991.
[18] K. Yamazaki, T. Tsubouchi, and M. Tomono, "Motion Planning for a Mobile Manipulator Based on Joint Motions for Error Recovery," IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 7-12, 2006.
