
JACIII Vol.23 No.3 pp. 519-527
doi: 10.20965/jaciii.2019.p0519
(2019)

Paper:

Human Posture Recognition for Estimation of Human Body Condition

Wei Quan*, Jinseok Woo*, Yuichiro Toda**, and Naoyuki Kubota*

*Graduate School of Systems Design, Tokyo Metropolitan University
6-6 Asahigaoka, Hino, Tokyo 191-0055, Japan

**Graduate School of Natural Science and Technology, Okayama University
3-1-1 Tsushima-Naka, Kita, Okayama, Okayama 700-8530, Japan

Received: November 30, 2018
Accepted: December 25, 2018
Published: May 20, 2019
Keywords: human posture recognition, growing neural gas, particle swarm optimization, human-robot interaction
Abstract

Human posture recognition has been a popular research topic with the development of related fields such as human-robot interaction and simulation operation. Most existing methods are based on supervised learning and require a large amount of training data to achieve reliable assessment. In this study, we address this by applying unsupervised learning algorithms based on a forward kinematics model of the human skeleton. We then refine the method by integrating particle swarm optimization (PSO). The advantage of the proposed method is that no pre-training data is required for human posture generation and recognition. We validate the method through a series of experiments with human subjects.
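The pipeline described above couples unsupervised learning over depth data (growing neural gas, per the keywords) with a skeletal forward kinematics model whose joint parameters are fitted by PSO. As a rough illustration of that fitting step only, the following Python sketch, which is not the authors' implementation, uses a standard global-best PSO (Kennedy and Eberhart, 1995) to recover the joint angles that make a forward kinematics model match observed joint positions. The two-link planar chain, link lengths, swarm parameters, and all function names are assumptions chosen for illustration.

import numpy as np

def forward_kinematics(angles, link_lengths):
    # Planar forward kinematics: returns the 2-D position of each joint
    # of a serial chain, given joint angles and link lengths.
    positions = [np.zeros(2)]
    total_angle = 0.0
    for theta, length in zip(angles, link_lengths):
        total_angle += theta
        step = length * np.array([np.cos(total_angle), np.sin(total_angle)])
        positions.append(positions[-1] + step)
    return np.array(positions[1:])

def fitness(angles, link_lengths, observed):
    # Sum of squared distances between model joints and observed joints.
    return np.sum((forward_kinematics(angles, link_lengths) - observed) ** 2)

def pso(objective, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    # Standard global-best PSO: each particle tracks its personal best,
    # and the swarm is pulled toward the best solution found so far.
    rng = np.random.default_rng(0)
    x = rng.uniform(-np.pi, np.pi, (n_particles, dim))  # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

# Toy usage: recover two joint angles of a planar "limb" from observed joints.
links = [0.3, 0.25]                                # hypothetical link lengths [m]
true_angles = np.array([0.6, -0.4])
observed = forward_kinematics(true_angles, links)  # stands in for sensor data
best, err = pso(lambda a: fitness(a, links, observed), dim=2)
print("estimated angles:", best, "residual:", err)

In the actual method, the observed positions would presumably come from nodes learned from the depth-sensor point cloud, and the chain would be a full human skeleton model rather than a two-link arm.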

Cite this article as:
W. Quan, J. Woo, Y. Toda, and N. Kubota, “Human Posture Recognition for Estimation of Human Body Condition,” J. Adv. Comput. Intell. Intell. Inform., Vol.23 No.3, pp. 519-527, 2019.

