JRM Vol.15 No.3 pp. 271-277
doi: 10.20965/jrm.2003.p0271
(2003)

Paper:

HMM-based Temporal Difference Learning with State Transition Updating for Tracking Human Communicational Behaviors

Minh Anh T. Ho, Yoji Yamada, and Yoji Umetani

Intelligent Systems Laboratory, Graduate School of Toyota Technological Institute, 2-12 Hisakata, Tempaku-ku, Nagoya, 468-2511 Japan

Received:
November 11, 2002
Accepted:
March 4, 2003
Published:
June 20, 2003
Keywords:
visual tracking, intended gestures, hidden Markov model, reinforcement learning, state transition update
Abstract

In our original system, we used hidden Markov models (HMMs) to model rough gesture patterns. We later applied temporal difference (TD) learning to adjust the tracker's action model for its behavior in the tracking task. Here, we integrate the two methods into a single algorithm by assigning the state transition probability in the HMMs as the reward in TD learning. Identification of the sign gesture context through wavelet analysis autonomously provides a reward value for optimizing the action patterns of the attentive visual tracker (AVAT). A bound on the state value functions serves as a constraint factor in the TD updating procedure, determining whether the predictive models need to be updated in accordance with the action models. Experimental results of extracting an operator's hand sign sequence during natural walking demonstrate AVAT development within the perceptual organization framework.
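The core idea of the abstract, using an HMM state transition probability as the TD-learning reward, can be sketched in a few lines. This is a minimal illustrative example, not the paper's actual implementation: the matrix `A`, the function `td_update`, and the step size and discount values are all assumptions introduced here for clarity.

```python
import numpy as np

def td_update(V, s, s_next, A, alpha=0.1, gamma=0.9):
    """One tabular TD(0) step.

    Illustrative sketch: the reward for the transition s -> s_next is
    taken directly from the HMM transition probability A[s, s_next],
    as the abstract describes. All names and parameters are hypothetical.
    """
    reward = A[s, s_next]
    V[s] += alpha * (reward + gamma * V[s_next] - V[s])
    return V

# Toy 3-state HMM transition matrix (each row sums to 1).
A = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])

V = np.zeros(3)
V = td_update(V, 0, 1, A)  # reward = A[0, 1] = 0.2, so V[0] becomes 0.02
```

The bound on the state value functions mentioned in the abstract would act as an additional check on `V` (e.g., only triggering a model update when a value leaves a prescribed interval); it is omitted here to keep the sketch minimal.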

Cite this article as:
Minh Anh T. Ho, Yoji Yamada, and Yoji Umetani, “HMM-based Temporal Difference Learning with State Transition Updating for Tracking Human Communicational Behaviors,” J. Robot. Mechatron., Vol.15, No.3, pp. 271-277, 2003.