JRM Vol.17 No.6 pp. 672-680
doi: 10.20965/jrm.2005.p0672


Human-Like Daily Action Recognition Model

Taketoshi Mori and Kousuke Tsujioka

The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan

Received: January 28, 2005
Accepted: May 13, 2005
Published: December 20, 2005

Keywords: behavior understanding, action recognition, human modeling, motion capture, action description

This paper proposes a human-like action recognition model. When the model is implemented as a system, the system recognizes human actions in much the same way that human beings do. The recognition algorithm is constructed to take account of the following characteristics of human action recognition: simultaneous recognition of multiple actions, priority between actions, fuzziness of judgment, multiple judgment conditions for one action, and the ability to recognize actions from a partial view of the body. Experiments comparing the system's output with completed questionnaires demonstrated that the system recognizes human actions the way a human being does. The results ensure natural understanding of human actions by a system, which leads to smooth communication between computer systems and human beings.
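The characteristics listed above can be illustrated with a minimal sketch. This is not the paper's implementation; the action names, feature names, thresholds, and the min-combination of fuzzy condition scores are all illustrative assumptions, chosen only to show how simultaneous recognition, action priority, fuzzy judgment, multiple conditions per action, and tolerance of a partial body view might fit together.

```python
def fuzzy(value, low, high):
    """Piecewise-linear fuzzy membership: 0 below low, 1 above high."""
    if high == low:
        return 1.0 if value >= high else 0.0
    return max(0.0, min(1.0, (value - low) / (high - low)))

# Hypothetical action definitions (not from the paper): each action has
# one or more fuzzy conditions, combined with min (all must hold), plus
# a priority used to order overlapping recognition results.
ACTIONS = {
    "standing": {
        "priority": 1,
        "conditions": [lambda f: fuzzy(f.get("hip_height", 0.0), 0.6, 0.9)],
    },
    "sitting": {
        "priority": 1,
        "conditions": [
            lambda f: 1.0 - fuzzy(f.get("hip_height", 1.0), 0.5, 0.8),
            lambda f: fuzzy(f.get("knee_bend", 0.0), 0.4, 0.8),
        ],
    },
    "waving": {
        "priority": 2,  # higher priority: reported ahead of posture labels
        "conditions": [lambda f: fuzzy(f.get("hand_speed", 0.0), 0.2, 0.6)],
    },
}

def recognize(features, actions=ACTIONS, threshold=0.5):
    """Return every action scoring above threshold (simultaneous
    recognition), ordered by priority and then fuzzy score.  Features
    absent from the input fall back to defaults that score zero, so the
    sketch degrades gracefully given only a partial view of the body."""
    results = []
    for name, spec in actions.items():
        score = min(cond(features) for cond in spec["conditions"])
        if score >= threshold:
            results.append((name, score))
    results.sort(key=lambda r: (-actions[r[0]]["priority"], -r[1]))
    return results
```

For example, a frame with lowered hips, bent knees, and a fast-moving hand yields both "waving" and "sitting" simultaneously, with "waving" listed first by priority; a frame where only the hand is visible still yields "waving" alone.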

Cite this article as:
Taketoshi Mori and Kousuke Tsujioka, “Human-Like Daily Action Recognition Model,” J. Robot. Mechatron., Vol.17, No.6, pp. 672-680, 2005.
