
JRM Vol.21 No.4 pp. 478-488 (2009)
doi: 10.20965/jrm.2009.p0478

Paper:

Autonomous Motion Generation Based on Reliable Predictability

Shun Nishide*, Tetsuya Ogata*, Jun Tani**, Kazunori Komatani*, and Hiroshi G. Okuno*

*Graduate School of Informatics, Kyoto University
Engineering Building #10, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501, Japan

**Brain Science Institute, RIKEN
2-1 Hirosawa, Wako City, Saitama 351-0198, Japan

Received: December 1, 2008
Accepted: May 29, 2009
Published: August 20, 2009
Keywords: neurorobotics, neural networks, humanoid robots
Abstract
Predictability is an important factor in generating object manipulation motions. In this paper, the authors present a technique for generating autonomous object pushing motions based on the consistency of object dynamics, which is tightly connected to reliable predictability. The technique first creates an internal model of the robot and object dynamics using a Recurrent Neural Network with Parametric Bias (RNNPB), trained on transitions of extracted object features and the robot motions generated during active sensing experiences with objects. Next, the technique searches the model for the most consistent object dynamics and the corresponding robot motion by applying the steepest descent method to a consistency evaluation function. Finally, the initial static image of the object is linked to the acquired robot motion using a hierarchical neural network. To evaluate the method, the authors conducted a motion generation experiment using pushing motions with cylindrical objects. The experiment showed that the method generalizes to different object postures, generating consistent rolling motions.
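
As an illustration only, the following Python sketch (not the authors' implementation) shows how a parametric-bias (PB) vector could be searched by steepest descent on a consistency evaluation function. A toy forward model stands in for the trained RNNPB, consistency is assumed here to be the variance of predicted step-to-step feature changes, and all function and variable names are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 2)) * 0.1   # toy stand-in for learned RNNPB weights

    def predict_sequence(pb, steps=10):
        """Toy forward model: maps a 2-D PB vector to a predicted feature sequence."""
        x = np.zeros(4)
        seq = []
        for _ in range(steps):
            x = np.tanh(W @ pb + 0.5 * x)
            seq.append(x.copy())
        return np.array(seq)

    def inconsistency(pb):
        """Consistency evaluation (assumed form): variance of feature transitions."""
        deltas = np.diff(predict_sequence(pb), axis=0)
        return float(np.var(deltas))

    def search_pb(pb0, lr=0.5, iters=200, eps=1e-4):
        """Steepest descent on the inconsistency score; gradient by finite differences."""
        pb = np.array(pb0, dtype=float)
        for _ in range(iters):
            grad = np.zeros_like(pb)
            for i in range(pb.size):
                d = np.zeros_like(pb)
                d[i] = eps
                grad[i] = (inconsistency(pb + d) - inconsistency(pb - d)) / (2 * eps)
            pb -= lr * grad
        return pb

    best_pb = search_pb(rng.normal(size=2))
    print("PB giving the most consistent predicted dynamics:", best_pb)

In the paper's setting, the RNNPB would first be trained on object-feature and motor sequences gathered through active sensing, and a separate hierarchical network would then map the object's initial static image to the robot motion associated with the selected PB; the toy model above only illustrates the search step.
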
Cite this article as:
S. Nishide, T. Ogata, J. Tani, K. Komatani, and H. Okuno, “Autonomous Motion Generation Based on Reliable Predictability,” J. Robot. Mechatron., Vol.21 No.4, pp. 478-488, 2009.
