Real-time Auditory and Visual Multiple-speaker Tracking for Human-robot Interaction
Kazuhiro Nakadai*, Ken-ichi Hidai**, Hiroshi G. Okuno*,***, Hiroshi Mizoguchi**** and Hiroaki Kitano*,*****
*Kitano Symbiotic Systems Project, ERATO, Japan Science and Technology Corp. Mansion 31 Suite 6A, 6-31-15, Jingumae, Shibuya-ku, Tokyo, 150-0001 Japan
**Digital Creatures Laboratory, Sony Corp.
***Graduate School of Informatics, Kyoto University
****Department of Mechanical Engineering, Tokyo University of Science
*****Sony Computer Science Laboratories, Inc.
This paper addresses real-time multiple-speaker tracking, which is essential for robot perception and human-robot social interaction. The difficulty lies in handling a mixture of sounds, occlusion (some speakers are hidden), and the demand for real-time processing. Our approach consists of three components: (1) extraction of each speaker's direction using the interaural phase difference and interaural intensity difference, (2) resolution of each speaker's direction by multimodal integration of audition, vision, and motion, canceling the motor noise that motion inevitably produces, even for a speaker who is unseen or silent, and (3) a distributed implementation on three PCs connected by a TCP/IP network to attain real-time processing. As a result, we attain robust real-time speaker tracking with a 200 ms delay in a non-anechoic room, even when multiple speakers exist and the tracked person is visually occluded. In addition, the feasibility of social interaction is shown by applying our technique to a receptionist robot and a companion robot at a party.
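To illustrate the first component, the following is a minimal sketch of estimating a sound-source azimuth from the interaural phase difference (IPD) of a two-microphone pair, under a simple free-field far-field model. The microphone spacing, frequency band, and the use of a per-bin median are illustrative assumptions; the paper's actual system also exploits the interaural intensity difference and an auditory epipolar geometry, which are omitted here.

```python
import numpy as np

def ipd_azimuth(left, right, fs, mic_distance=0.18, c=343.0):
    """Sketch: estimate azimuth (degrees) from the IPD of two channels.

    Assumes a far-field source and free-field propagation, so the
    inter-channel delay is tau = mic_distance * sin(theta) / c and the
    per-frequency phase difference is IPD(f) = 2*pi*f*tau.
    """
    n = len(left)
    window = np.hanning(n)
    # Cross-power spectrum between the two channels.
    L = np.fft.rfft(left * window)
    R = np.fft.rfft(right * window)
    cross = L * np.conj(R)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)

    # Keep bins below the spatial-aliasing limit f < c / (2 * d),
    # where the IPD is unambiguous (no phase wrapping).
    valid = (freqs > 100.0) & (freqs < c / (2.0 * mic_distance))
    ipd = np.angle(cross[valid])   # phase difference per frequency bin
    f = freqs[valid]

    # Invert IPD = 2*pi*f * d*sin(theta)/c for theta in each bin,
    # then take the median across bins for robustness.
    s = np.clip(c * ipd / (2.0 * np.pi * f * mic_distance), -1.0, 1.0)
    return float(np.degrees(np.median(np.arcsin(s))))
```

For example, feeding the function a broadband signal and a copy delayed by a few samples recovers the azimuth implied by that delay to within a few degrees; real rooms add reverberation and sensor noise that the paper's full system must contend with.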
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.
Copyright© 2002 by Fuji Technology Press Ltd. and Japan Society of Mechanical Engineers. All rights reserved.