Paper:
Acquisition of Behavioral Patterns Depends on Self-Embodiment Based on Robot Learning Under Multiple Instructors
Masato Kotake*,**, Daisuke Katagami*, and Katsumi Nitta*
*Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, 4259 Nagatsuta-cho, Midori-ku, Yokohama, 226-8503, Japan
**Daikin Industries, Ltd., 1304 Kanaoka-cho, Kita-ku, Sakai, Osaka, Japan
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.