Posture Estimation of a Human Body from Thermal Images Using 2D Appearance Models of a 3D Ellipsoidal Model
Mitsuhiro Hayase* and Susumu Shimada**
*Graduate School of Computer and Cognitive Science, Chukyo University
**School of Information Science and Technology, Chukyo University, 101 Tokodachi, Kaizu-cho, Toyota, 470-0393 Japan
We propose a new model-based recognition method that uses three-dimensional (3D) ellipsoidal models of various sizes and proportions together with their two-dimensional (2D) appearance models. Most model-based vision systems are designed to recognize a specific object, with a model tailored to that object. In contrast, our method can recognize objects of varying proportions; we apply it to posture estimation of the human body from thermal images.
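The core idea of relating a 3D ellipsoidal model to its 2D appearance can be illustrated with a minimal sketch. The function below is hypothetical (not from the paper): it samples points on an ellipsoid with semi-axes `a`, `b`, `c`, rotates it about the y-axis, and orthographically projects it to obtain the extent of its 2D silhouette, which is how a 3D body-part model yields a 2D appearance model at a given orientation.

```python
import math

def project_ellipsoid(a, b, c, theta, n=200):
    """Approximate the 2D orthographic silhouette extent of a 3D ellipsoid.

    a, b, c : semi-axes of the ellipsoid
    theta   : rotation about the y-axis (radians)
    Returns (width, height) of the projected silhouette.
    Hypothetical helper illustrating the 2D-appearance-model idea,
    not the authors' actual implementation.
    """
    xs, ys = [], []
    for i in range(n):
        for j in range(n):
            # parametric point on the ellipsoid surface
            u = math.pi * i / (n - 1)   # polar angle in [0, pi]
            v = 2 * math.pi * j / n     # azimuth in [0, 2*pi)
            x = a * math.sin(u) * math.cos(v)
            y = b * math.sin(u) * math.sin(v)
            z = c * math.cos(u)
            # rotate about the y-axis, then drop z (orthographic projection)
            xr = x * math.cos(theta) + z * math.sin(theta)
            xs.append(xr)
            ys.append(y)
    return max(xs) - min(xs), max(ys) - min(ys)
```

Varying `theta` (and the corresponding rotations about the other axes) produces the family of 2D appearance models against which a segmented thermal silhouette could be matched.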
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.