An Interactive System with Facial Expression Recognition
Yuyi Shang, Mie Sato, and Masao Kasuga
Graduate School of Engineering, Utsunomiya University, 7-1-2 Yoto, Utsunomiya, Tochigi 321-8585, Japan
Received: February 24, 2005 / Accepted: June 7, 2005 / Published: November 20, 2005
Keywords: interactive system, facial expression recognition, feature extraction, natural language
Abstract
To make communication between users and machines more comfortable, we focus on facial expressions and automatically classify them into four expression candidates: “joy,” “anger,” “sadness,” and “surprise.” The classification uses features that correspond to expression-motion patterns, and voice data is then output based on the classification results. When outputting voice data, the insufficiency of the classification is taken into account: we choose the first and second expression candidates from the classification results. To realize interactive communication between users and machines, information on these two candidates is used when accessing a voice database, which contains voice data corresponding to emotions.
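The pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the scoring function, candidate ranking, and voice-database keying shown here are all hypothetical, standing in for the paper's feature extraction and classification stages.

```python
# Hypothetical sketch: rank four expression candidates by a per-expression
# score, take the top two, and use the pair to look up voice data.

EXPRESSIONS = ["joy", "anger", "sadness", "surprise"]

def classify_expression(feature_scores):
    """Return the first and second expression candidates.

    feature_scores: dict mapping each expression name to a similarity
    score against its expression-motion pattern (higher = closer match).
    These scores are placeholders for the paper's feature extraction.
    """
    ranked = sorted(EXPRESSIONS, key=lambda e: feature_scores[e], reverse=True)
    return ranked[0], ranked[1]

# Hypothetical voice database keyed by (first, second) candidate pair.
VOICE_DB = {
    ("joy", "surprise"): "voice_joy_surprise.wav",
    ("sadness", "anger"): "voice_sadness_anger.wav",
}

def select_voice(feature_scores):
    """Choose voice data from both candidates, not just the top one."""
    first, second = classify_expression(feature_scores)
    # Fall back to the first candidate alone if the pair is absent.
    return VOICE_DB.get((first, second), "voice_" + first + ".wav")
```

Using both the first and second candidates, as the abstract suggests, hedges against misclassification: when the top result is uncertain, the runner-up still influences which voice response is selected.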
Cite this article as: Y. Shang, M. Sato, and M. Kasuga, “An Interactive System with Facial Expression Recognition,” J. Adv. Comput. Intell. Intell. Inform., Vol.9, No.6, pp. 637-642, 2005.