
JACIII Vol.15 No.5 pp. 563-572
doi: 10.20965/jaciii.2011.p0563
(2011)

Paper:

Multimodal Gesture Recognition for Mascot Robot System Based on Choquet Integral Using Camera and 3D Accelerometers Fusion

Yongkang Tang*, Hai An Vu*, Phuc Q. Le*, Daisuke Masano*,
OoHan Thet*, Chastine Fatichah*, Zhentao Liu*,
Masashi Yamaguchi*, Martin Leonard Tangel*,
Fangyan Dong*, Yoichi Yamazaki**, and Kaoru Hirota*

*Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, G3-49, 4259 Nagatsuta, Midori-ku, Yokohama, Kanagawa 226-8502, Japan

**Department of Electrical, Electronic & Information Engineering, Faculty of Engineering, Kanto Gakuin University, 1-50-1 Mutsuura-higashi, Kanazawa-ku, Yokohama, Kanagawa 236-8501, Japan

Received:
November 20, 2010
Accepted:
February 28, 2011
Published:
July 20, 2011
Keywords:
gesture recognition, sensor fusion, Choquet integral, human-robot interaction, 3D accelerometer
Abstract
A multimodal gesture recognition method for a mascot robot system is proposed based on the Choquet integral, fusing camera and 3D accelerometer data. By optimizing two fuzzy measures in the training phase for the two recognition units, i.e., the camera-based and the accelerometer-based units, the method achieves a recognition rate of 92.7% for 8 types of gestures, an improvement of approximately 20% over the rate of either unit alone. The proposed method targets casual communication from humans to robots by integrating nonverbal gesture messages with verbal messages.
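
As a reading aid, the minimal Python sketch below illustrates how a discrete Choquet integral with respect to a fuzzy measure (in the sense of [14]) can fuse the confidence scores of two recognition units. The source names and measure values are illustrative assumptions for exposition, not the optimized fuzzy measures reported in the paper.

def choquet_integral(scores, mu):
    # Discrete Choquet integral of per-source scores w.r.t. fuzzy measure mu.
    # scores: dict mapping source name -> confidence score in [0, 1]
    # mu: dict mapping frozenset of source names -> measure value,
    #     with mu[frozenset()] == 0 and mu[frozenset(all sources)] == 1
    order = sorted(scores, key=scores.get)  # ascending: x_(1) <= ... <= x_(n)
    total, prev = 0.0, 0.0
    for i, src in enumerate(order):
        subset = frozenset(order[i:])       # sources whose score is >= x_(i)
        total += (scores[src] - prev) * mu[subset]
        prev = scores[src]
    return total

# Hypothetical fuzzy measure over the two units (not the paper's values).
mu = {
    frozenset(): 0.0,
    frozenset({"camera"}): 0.4,
    frozenset({"accel"}): 0.5,
    frozenset({"camera", "accel"}): 1.0,
}
fused = choquet_integral({"camera": 0.7, "accel": 0.9}, mu)
# fused = 0.7 * 1.0 + (0.9 - 0.7) * 0.5 = 0.80

In a multi-class setting such a fused score would presumably be computed once per gesture class, with the gesture of maximal fused score selected as the recognition result.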
Cite this article as:
Y. Tang, H. Vu, P. Le, D. Masano, O. Thet, C. Fatichah, Z. Liu, M. Yamaguchi, M. Tangel, F. Dong, Y. Yamazaki, and K. Hirota, “Multimodal Gesture Recognition for Mascot Robot System Based on Choquet Integral Using Camera and 3D Accelerometers Fusion,” J. Adv. Comput. Intell. Intell. Inform., Vol.15 No.5, pp. 563-572, 2011.
References
  1. [1] Y. Yamazaki, H. A. Vu et al., “Mascot Robot System by Integrating Eye Robot and Speech Recognition Using RT Middleware and its Casual Information Recommendation,” 3rd Int. Symposium on Computational Intelligence and Industrial Applications (ISCIIA2008), pp. 375-384, 2008.
  2. [2] H. A. Vu, Y. Yamazaki et al., “The Interrupt of Mascot Robot System Embedded in RT Middleware Based on Fuzzy Logic,” FACTA UNIVERSITATIS Series: Mechanics, Automatic Control and Robotics, Vol.7, No.1, pp. 11-28, 2009.
  3. [3] C. Shan, T. Tan et al., “Real-Time Hand Tracking Using a Mean Shift Embedded Particle Filter,” Pattern Recognition, Vol.40, pp. 1958-1970, 2007.
  4. [4] E. Huber and D. Kortenkamp, “A Behavior Based Approach to Active Stereo Vision for Mobile Robots,” Engineering Applications of Artificial Intelligence, Vol.11, pp. 229-243, 1998.
  5. [5] R. Cowie, E. Douglas-Cowie, N. Tsapatsoulis et al., “Emotion Recognition in Human-Computer Interaction,” IEEE Signal Processing Magazine, Vol.18, No.1, pp. 32-80, 2001.
  6. [6] A. Vinciarelli, M. Pantic et al., “Social Signal Processing: State-of-the-Art and Future Perspectives of an Emerging Domain,” ACM Multimedia, pp. 1061-1070, 2008.
  7. [7] Z. Othman, A. R. Yaakub et al., “Virtual Environment Navigation Using an Image based Approach,” Student Conf. on Research and Development, pp. 364-367, 2002.
  8. [8] C. Keskin and L. Akarun, “STARS: Sign Tracking and Recognition System Using Input-Output HMMs,” Pattern Recognition Letters, Vol.30, pp. 1086-1095, 2009.
  9. [9] H. Kang, C. W. Lee et al., “Recognition Based Gesture Spotting in Video Games,” Pattern Recognition Letters, Vol.25, pp. 1701-1714, 2004.
  10. [10] G. Caridakis, K. Karpouzis et al., “SOMM: Self Organizing Markov Map for Gesture Recognition,” Pattern Recognition Letters, Vol.31, pp. 52-59, 2010.
  11. [11] V. Paquin and P. Cohen, “A Vision Based Gestural Guidance Interface for Mobile Robotic Platforms,” HCI/ECCV, LNCS, Vol.3058, pp. 39-47, 2004.
  12. [12] C. Amma, D. Gehrig et al., “Airwriting Recognition Using Wearable Motion Sensors,” Augmented Human Conf., 2010.
  13. [13] Y. Yamazaki, H. A. Vu et al., “Gesture Recognition Using Combination of Acceleration Sensor and Images for Casual Communication between Robots and Humans,” IEEE World Congress on Computational Intelligence (WCCI), 2010.
  14. [14] T. Murofushi and M. Sugeno, “An Interpretation of Fuzzy Measures and the Choquet Integral as an Integral with Respect to a Fuzzy Measure,” Fuzzy Sets and Systems, Vol.29, pp. 201-227, 1989.
  15. [15] T. Nakamura, K. Taki et al., “AMSS: A Similarity Measure for Time Series Data,” IEICE Trans. on Information and Systems, Vol.J91-D, pp. 2579-2588, 2008.
  16. [16] Micro Stone, MVP-RF8,
    http://www.microstone.co.jp/
  17. [17] M. H. Ko, G. West et al., “Using Dynamic Time Warping for Online Temporal Fusion in Multisensory Systems,” Information Fusion, Vol.9, pp. 370-388, 2008.
  18. [18] Z. Liu, F. Dong et al., “Proposal of Fuzzy Atmosfield for Mood Expression of Human-Robot Communication,” Int. Symposium on Intelligent Systems (iFAN), 2010.
