JRM Vol.20 No.4 pp. 567-577
doi: 10.20965/jrm.2008.p0567


Constructive Approach to Role-Reversal Imitation Through Unsegmented Interactions

Tadahiro Taniguchi*, Naoto Iwahashi**,***, Komei Sugiura***,
and Tetsuo Sawaragi****

*Department of Human & Computer Intelligence, Ritsumeikan University, 1-1-1 Nojihigashi, Kusatsu, Shiga 525-8577, Japan

**National Institute of Information and Communications Technology, 4-2-1 Nukui-Kitamachi, Koganei, Tokyo 184-8795, Japan

***Advanced Telecommunications Research Institute International, 2-2-2 Hikaridai, Seikacho, Sorakugun, Kyoto 619-0288, Japan

****Graduate School of Engineering, Kyoto University, Yoshida Honmachi, Sakyo, Kyoto 606-8501, Japan

February 6, 2008
June 12, 2008
August 20, 2008
Keywords: imitation learning, self-organized learning, role-reversal imitation, switching linear model, keyword extraction

This paper presents a novel imitation-learning method that enables a robot to acquire a user’s key motions automatically. The learning architecture consists mainly of three modules: a switching autoregressive model (SARM), a keyword extractor that works without a dictionary, and a keyword selection filter that references the tutor’s reactions. Most previous research on imitation learning by autonomous robots targeted motions that had been segmented into meaningful parts by users or researchers in advance. To imitate certain behavior from continuous human motion, however, robots must find the segments to be learned on their own. To achieve this goal, the learning architecture converts a continuous time series into a discrete time series of letters using the SARM, finds meaningful segments using the dictionary-free keyword extractor, and removes less meaningful segments from the keywords using the user’s reactions. In experiments, an operator showed unsegmented motions to a robot and reacted to the motions the robot had acquired. Results showed that this framework enabled the robot to obtain several meaningful motions that the operator hoped it would acquire.
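The first two stages of the pipeline described above can be illustrated with a minimal sketch. This is not the paper's implementation: the AR model set, the window limits, and the frequency-based keyword criterion are all simplified, hypothetical stand-ins. The sketch assigns each time step the letter of the autoregressive model with the smallest one-step prediction error (a crude SARM-style discretization), then collects substrings of the letter string that repeat often enough to be candidate "keywords" without any dictionary.

```python
import numpy as np

def discretize_with_sarm(series, ar_models, order=2):
    """Assign each time step to the AR model (letter) whose one-step
    prediction error is smallest -- a simplified stand-in for
    SARM-based discretization. `ar_models` is a list of coefficient
    vectors of length `order` (hypothetical, chosen by the caller)."""
    letters = []
    for t in range(order, len(series)):
        context = series[t - order:t]                     # last `order` samples
        errors = [abs(series[t] - np.dot(coeffs, context))
                  for coeffs in ar_models]                # per-model error
        letters.append(chr(ord('a') + int(np.argmin(errors))))
    return ''.join(letters)

def extract_keywords(letter_string, min_len=3, max_len=8, min_count=2):
    """Collect substrings that recur at least `min_count` times --
    a crude dictionary-free keyword extractor over the letter string."""
    counts = {}
    n = len(letter_string)
    for i in range(n):
        for j in range(i + min_len, min(i + max_len, n) + 1):
            sub = letter_string[i:j]
            counts[sub] = counts.get(sub, 0) + 1
    return sorted(s for s, c in counts.items() if c >= min_count)

# Two toy AR(2) models: one tracks the previous sample, one negates it.
models = [np.array([0.0, 1.0]), np.array([0.0, -1.0])]
letters = discretize_with_sarm([1.0, 1.0, 1.0, 1.0, 1.0], models)
keywords = extract_keywords('abcabcabc')
```

In the actual architecture, the third module would then prune these candidate keywords using the tutor's reactions; that step depends on the interaction data and is omitted here.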

