JACIII Vol.11 No.8 pp. 989-997
doi: 10.20965/jaciii.2007.p0989


Acquisition of Behavioral Patterns Depends on Self-Embodiment Based on Robot Learning Under Multiple Instructors

Masato Kotake*,**, Daisuke Katagami*, and Katsumi Nitta*

*Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama, 226-8503, Japan

**Daikin Industries, Ltd., 1304 Kanaoka-cho, Kita-ku, Sakai-shi, Osaka, Japan

Received: March 22, 2007
Accepted: August 6, 2007
Published: October 20, 2007
Keywords: robot learning, embodiment, teaching by demonstration
We focus on robot learning under multiple instructors. Even when their goal is the same, different instructors inevitably use different teaching approaches. We propose incorporating DP matching and clustering to classify the instructors' teaching demonstrations into groups of similar ones. Experiments in which an AIBO robot was taught to walk forward demonstrated that our proposal acquired appropriate teaching approaches based on AIBO's different embodiments and maximized task accomplishment.
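The classification step described in the abstract can be sketched as follows: compute a DP-matching (dynamic time warping) distance between pairs of demonstration sequences, then group demonstrations whose distance falls under a threshold. This is a minimal illustrative sketch, not the authors' actual implementation; the 1-D sequence format, the greedy grouping scheme, and the threshold value are all assumptions.

```python
def dp_matching(a, b):
    """DP-matching (DTW) distance between two 1-D motion sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # step in a
                                 d[i][j - 1],      # step in b
                                 d[i - 1][j - 1])  # step in both
    return d[n][m]

def group_demonstrations(demos, threshold=2.0):
    """Greedily assign each demonstration to the first group whose
    representative (the group's first member) is within the threshold."""
    groups = []
    for demo in demos:
        for group in groups:
            if dp_matching(demo, group[0]) <= threshold:
                group.append(demo)
                break
        else:
            groups.append([demo])
    return groups

# Hypothetical joint-angle trajectories from three instructors.
demos = [
    [0.0, 1.0, 2.0, 3.0],   # instructor A
    [0.0, 1.1, 1.9, 3.0],   # instructor B: similar approach
    [3.0, 2.0, 1.0, 0.0],   # instructor C: reversed approach
]
groups = group_demonstrations(demos)
print(len(groups))  # A and B are grouped; C forms its own group
```

Because DP matching aligns sequences nonlinearly in time, demonstrations that differ only in speed still land in the same group, which is what makes it suitable for comparing teaching demonstrations from different instructors.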
Cite this article as:
M. Kotake, D. Katagami, and K. Nitta, “Acquisition of Behavioral Patterns Depends on Self-Embodiment Based on Robot Learning Under Multiple Instructors,” J. Adv. Comput. Intell. Intell. Inform., Vol.11 No.8, pp. 989-997, 2007.
References:
  [1] S. Yamada and T. Yamaguchi, “Training AIBO like a Dog,” The 13th Int. Workshop on Robot and Human Interactive Communication, pp. 431-436, 2004.
  [2] T. Inamura, M. Inaba, and H. Inoue, “User adaptation of human-robot interaction model based on Bayesian network and introspection of interaction experience,” Int. Conf. on Intelligent Robots and Systems, pp. 2139-2144, 2000.
  [3] K. Dautenhahn and C. L. Nehaniv (Eds.), “Imitation in Animals and Artifacts,” MIT Press, 2002.
  [4] C. L. Nehaniv and K. Dautenhahn (Eds.), “Imitation and Social Learning in Robots, Humans and Animals: Behavioural, Social and Communicative Dimensions,” Cambridge Univ. Press, 2007.
  [5] K. Terada, Y. Ohmura, and Y. Kuniyoshi, “Analysis and Control of Whole Body Dynamic Humanoid Motion – Towards Experiments on a Roll-and-Rise Motion,” Int. Conf. on Intelligent Robots and Systems, 2003.
  [6] K. Ogawara, J. Takamatsu, H. Kimura, and K. Ikeuchi, “Extraction of Essential Interactions Through Multiple Observations of Human Demonstrations,” IEEE Trans. on Industrial Electronics, Vol.50, No.4, pp. 667-675, 2003.
  [7] W. Takano and Y. Nakamura, “Segmentation of human behavior patterns based on the probabilistic correlation,” The 19th Annual Conf. of the Japanese Society for Artificial Intelligence, 2005.
  [8] A. Alissandrakis, C. L. Nehaniv, and K. Dautenhahn, “Imitation With ALICE: Learning to Imitate Corresponding Actions Across Dissimilar Embodiments,” IEEE Trans. on Systems, Man & Cybernetics: Part A, Vol.32, No.4, pp. 482-496, 2002.
  [9] A. Alissandrakis, C. L. Nehaniv, and K. Dautenhahn, “Correspondence Mapping Induced State and Action Metrics for Robotic Imitation,” IEEE Trans. on Systems, Man & Cybernetics: Part B, Special issue on Robot Learning by Observation, Demonstration and Imitation, Vol.37, No.2, pp. 299-307, 2007.
  [10] M. Kotake, D. Katagami, and K. Nitta, “Acquisition of Motion Skills by Multiple Human Teachings,” Joint 3rd Int. Conf. on Soft Computing and Intelligent Systems and 7th Int. Symposium on Advanced Intelligent Systems, pp. 1048-1053, 2006.
  [11] H. Sakoe and S. Chiba, “Dynamic Programming Algorithm Optimization for Spoken Word Recognition,” IEEE Trans. on Acoustics, Speech, and Signal Processing, Vol.ASSP-26, No.1, pp. 43-49, 1978.
  [12] Y. Yamada, E. Suzuki, H. Yokoi, and K. Takabayashi, “Decision-tree Induction from Time-series Data Based on a Standard-example Split Test,” Proc. Twentieth Int. Conf. on Machine Learning (ICML), pp. 840-847, 2003.
  [13] C. Isbell, C. Shelton, M. Kearns, S. Singh, and P. Stone, “A Social Reinforcement Learning Agent,” Proc. of the Fifth Int. Conf. on Autonomous Agents, pp. 377-384, ACM Press, 2001.
