Paper:
An Interactive System with Facial Expression Recognition
Yuyi Shang, Mie Sato, and Masao Kasuga
Graduate School of Engineering, Utsunomiya University, 7-1-2 Yoto, Utsunomiya, Tochigi 321-8585, Japan
To make communication between users and machines more comfortable, we focus on facial expressions and automatically classify them into four expression candidates: “joy,” “anger,” “sadness,” and “surprise.” The classification uses features that correspond to expression-motion patterns, and voice data is then output based on the classification results. When outputting voice data, we take the possibility of misclassification into account: we choose the first and second expression candidates from the classification results. To realize interactive communication between users and machines, information on both candidates is used to access a voice database, which contains voice data corresponding to emotions.
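The candidate-selection and voice-output flow described above can be illustrated with a minimal sketch. The following Python snippet assumes hypothetical names (classify_expression, VOICE_DB, select_voice) and a precomputed score per expression, since the paper does not specify an implementation: the classifier scores the four expressions, the two highest-scoring candidates are kept, and their pair indexes the voice database.

```python
# Minimal sketch of the top-two candidate selection and voice lookup.
# All identifiers here are hypothetical placeholders, not the authors' API.

EXPRESSIONS = ["joy", "anger", "sadness", "surprise"]

# Hypothetical voice database keyed by (first, second) candidate pairs.
VOICE_DB = {
    ("joy", "surprise"): "voice_joy_surprise.wav",
    ("joy", "sadness"): "voice_joy_sadness.wav",
    # ... entries for the remaining candidate pairs
}

def classify_expression(features):
    """Return a score for each of the four expressions.

    `features` stands in for the expression-motion pattern features
    the paper extracts; here we assume one score per expression.
    """
    return dict(zip(EXPRESSIONS, features))

def select_voice(features):
    # Rank expressions by score and keep the first and second
    # candidates, so classification uncertainty is reflected in
    # the voice output.
    scores = classify_expression(features)
    first, second = sorted(scores, key=scores.get, reverse=True)[:2]
    # Look up voice data keyed by both candidates.
    return VOICE_DB.get((first, second))

if __name__ == "__main__":
    # Example: scores favoring "joy" with "surprise" as runner-up.
    print(select_voice([0.8, 0.1, 0.05, 0.6]))  # -> voice_joy_surprise.wav
```

Keeping the second candidate alongside the first lets the voice database respond to mixed or ambiguous expressions rather than committing to a single, possibly wrong, label.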
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.