
JACIII Vol.19 No.4 pp. 532-543 (2015)
doi: 10.20965/jaciii.2015.p0532

Paper:

Motion Segmentation and Recognition for Imitation Learning and Influence of Bias for Learning Walking Motion of Humanoid Robot Based on Human Demonstrated Motion

Yasutake Takahashi*, Hiroki Hatano*, Yosuke Maida**, Kazuyuki Usui**, and Yoichiro Maeda***

*Department of Human and Artificial Intelligent Systems, Graduate School of Engineering, University of Fukui
3-9-1 Bunkyo, Fukui, Fukui 910-8507, Japan

**Department of Human and Artificial Intelligent Systems, Faculty of Engineering, University of Fukui
3-9-1 Bunkyo, Fukui, Fukui 910-8507, Japan

***Department of Robotics, Faculty of Engineering, Osaka Institute of Technology
5-16-1 Omiya, Asahi-ku, Osaka 535-8585, Japan

Received:
April 24, 2014
Accepted:
May 27, 2015
Published:
July 20, 2015
Keywords:
motion segmentation and recognition, imitation learning, learning bias, humanoid robot, via-point representation
Abstract
Two main issues arise in practical imitation learning by humanoid robots observing human behavior: the first is segmenting and recognizing motion demonstrated naturally by a human being, and the second is utilizing the demonstrated motion for imitation learning. Specifically, the first involves motion segmentation and recognition based on the humanoid robot's motion repertoire for imitation learning, and the second introduces learning bias based on demonstrated motion into the humanoid robot's imitation learning of walking. We show the validity of our motion segmentation and recognition in a practical setting and report the results of our investigation into the influence of learning bias in humanoid robot simulations.
Cite this article as:
Y. Takahashi, H. Hatano, Y. Maida, K. Usui, and Y. Maeda, “Motion Segmentation and Recognition for Imitation Learning and Influence of Bias for Learning Walking Motion of Humanoid Robot Based on Human Demonstrated Motion,” J. Adv. Comput. Intell. Intell. Inform., Vol.19 No.4, pp. 532-543, 2015.
