Extracting Multimodal Dynamics of Objects Using RNNPB
Tetsuya Ogata*, Hayato Ohba*, Jun Tani**,
Kazunori Komatani*, and Hiroshi G. Okuno*
*Graduate School of Informatics, Kyoto University, Kyoto, Japan
**Brain Science Institute, RIKEN, Saitama, Japan
Dynamic features play an important role in recognizing objects that have similar static features, such as color or shape. This paper focuses on active sensing that exploits the dynamic features of an object. Our humanoid robot, Robovie-IIs, an extended version of Robovie-II, uses its arms to move an object and extract its dynamic features. The key issue is how to extract symbols from the different temporal states of the object. We use a recurrent neural network with parametric bias (RNNPB), which generates self-organized nodes in the parametric bias (PB) space. We trained an RNNPB with 42 neurons using the sound, trajectory, and tactile-sensor data generated while the robot was moving or hitting an object with its arm. Clusters corresponding to 20 types of objects were self-organized. Experiments with unknown (untrained) objects showed that our method configured them appropriately in the PB space, demonstrating its generalization capability.
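The RNNPB described above is, in essence, a Jordan-type recurrent network whose input is augmented with a small parametric bias (PB) vector: the connection weights are shared across all training sequences, while each sequence's PB vector is updated by the error gradient delivered to the PB input nodes, so that sequences with similar dynamics self-organize into nearby points in PB space. The sketch below is a minimal, simplified illustration of that idea, not the paper's implementation: the layer sizes, learning rate, and one-step (non-BPTT) gradient are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only; the paper's 42-neuron architecture differs)
IN, PBD, HID = 1, 2, 8   # input/output dim, PB dim, hidden dim
LR = 0.05

# Shared weights: [input | PB | context] -> hidden -> output (next-step prediction)
W_in  = rng.normal(0.0, 0.3, (HID, IN + PBD + HID))
b_h   = np.zeros(HID)
W_out = rng.normal(0.0, 0.3, (IN, HID))
b_o   = np.zeros(IN)

def step(x, pb, h):
    """One forward step: predict x[t+1] from x[t], the PB vector, and context h."""
    z = np.concatenate([x, pb, h])
    h_new = np.tanh(W_in @ z + b_h)
    y = np.tanh(W_out @ h_new + b_o)
    return z, h_new, y

def train_epoch(sequences, pbs):
    """One pass of simplified one-step backprop (recurrent credit assignment is
    truncated for brevity). Updates the shared weights and each sequence's PB
    vector in place; returns the mean squared prediction error."""
    global W_in, b_h, W_out, b_o
    total, count = 0.0, 0
    for seq, pb in zip(sequences, pbs):
        h = np.zeros(HID)
        for t in range(len(seq) - 1):
            z, h_new, y = step(seq[t], pb, h)
            err = y - seq[t + 1]                       # prediction error
            total += float(err @ err); count += 1
            d_o = err * (1.0 - y**2)                   # tanh derivative at output
            d_h = (W_out.T @ d_o) * (1.0 - h_new**2)   # backprop into hidden layer
            grad_pb = W_in[:, IN:IN + PBD].T @ d_h     # gradient at the PB inputs
            W_out -= LR * np.outer(d_o, h_new); b_o -= LR * d_o
            W_in  -= LR * np.outer(d_h, z);     b_h -= LR * d_h
            pb    -= LR * grad_pb                      # per-sequence PB update
            h = h_new
    return total / count
```

Training two sequences with different dynamics (e.g. an oscillation and a constant signal) from identical initial PB vectors drives their PB vectors apart, which is the self-organizing property the paper exploits: an unknown object can then be placed in PB space by optimizing only its PB vector while the shared weights stay fixed.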
Tetsuya Ogata, Hayato Ohba, Jun Tani, Kazunori Komatani, and Hiroshi G. Okuno, “Extracting Multimodal Dynamics of Objects Using RNNPB,” J. Robot. Mechatron., Vol.17, No.6, pp. 681-688, 2005.
Copyright © 2005 by Fuji Technology Press Ltd. and Japan Society of Mechanical Engineers. All rights reserved.