JACIII Vol.9 No.6 pp. 637-642
doi: 10.20965/jaciii.2005.p0637


An Interactive System with Facial Expression Recognition

Yuyi Shang, Mie Sato, and Masao Kasuga

Graduate School of Engineering, Utsunomiya University, 7-1-2 Yoto, Utsunomiya, Tochigi 321-8585, Japan

Received: February 24, 2005
Accepted: June 7, 2005
Published: November 20, 2005
Keywords: interactive system, facial expression recognition, feature extraction, natural language
Abstract: To make communication between users and machines more comfortable, we focus on facial expressions and automatically classify them into four expression candidates: “joy,” “anger,” “sadness,” and “surprise.” The classification uses features that correspond to expression-motion patterns, and voice data is then output based on the classification results. When outputting voice data, the insufficiency of the classification is taken into account: we choose the first and second expression candidates from the classification results. To realize interactive communication between users and machines, information on both candidates is used when accessing a voice database, which contains voice data corresponding to emotions.
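The selection of first and second expression candidates described in the abstract can be sketched as follows. This is a hedged illustration, not the authors' implementation: the scoring function, the candidate ranking, and the voice-database keys (`VOICE_DB`) are all assumptions; the actual feature extraction from expression-motion patterns is outside the sketch.

```python
# Hypothetical sketch of top-two candidate selection and voice lookup.
# The four expression classes come from the paper; everything else
# (score format, database layout, file names) is assumed.
EXPRESSIONS = ["joy", "anger", "sadness", "surprise"]

# Illustrative voice database: emotion -> voice-data identifier.
VOICE_DB = {emotion: f"voice_{emotion}.wav" for emotion in EXPRESSIONS}

def top_two_candidates(scores):
    """Return the first and second expression candidates.

    `scores` maps each expression name to a classifier score
    (assumed output of the expression-motion-pattern classifier).
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[0], ranked[1]

def select_voice(scores):
    """Look up voice data for both candidates, so that uncertainty
    in the classification is reflected in the output."""
    first, second = top_two_candidates(scores)
    return VOICE_DB[first], VOICE_DB[second]

print(select_voice({"joy": 0.6, "anger": 0.1, "sadness": 0.05, "surprise": 0.25}))
```

Keeping the second-ranked candidate alongside the first is one simple way to account for the classification insufficiency the abstract mentions.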
Cite this article as:
Y. Shang, M. Sato, and M. Kasuga, “An Interactive System with Facial Expression Recognition,” J. Adv. Comput. Intell. Intell. Inform., Vol.9 No.6, pp. 637-642, 2005.
