Development of a Robotic Pet Using Sound Source Localization with the HARK Robot Audition System
Ryo Suzuki, Takuto Takahashi, and Hiroshi G. Okuno
Lambdax Bldg 3F, 2-4-12 Okubo, Shinjuku, Tokyo 169-0072, Japan
-  H. G. Okuno and K. Nakadai, “Robot audition: its rise and perspectives,” 2015 IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), 2015.
-  Q. Nguyen, S. Yun, and J. Choi, “Audio-visual integration for human-robot interaction in multi-person scenarios,” Proc. of the 2014 IEEE Emerging Technology and Factory Automation (ETFA), 2014.
-  J. Cech, R. Mittal, and A. Deleforge, “Active-speaker detection and localization with microphones and cameras embedded into a robotic head,” 2013 13th IEEE-RAS Int. Conf. on Humanoid Robots (Humanoids), 2013.
-  K. L. Koay, G. Lakatos, D. S. Syrdal, M. Gácsi, B. Bereczky, K. Dautenhahn, A. Miklósi, and M. L. Walters, “Hey! There is someone at your door. A hearing robot using visual communication signals of hearing dogs to communicate intent,” 2013 IEEE Symposium on Artificial Life (ALife), 2013.
-  A. Singh and J. E. Young, “Animal-inspired human-robot interaction: A robotic tail for communicating state,” 2012 7th ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI), 2012.
-  S. Yohanan and K. E. MacLean, “The role of affective touch in human-robot interaction: Human intent and expectations in touching the haptic creature,” Int. J. of Social Robotics, Vol.4, No.2, pp. 163-180, 2012.
-  W. Moyle, C. Jones, B. Sung, M. Bramble, S. O’Dwyer, M. Blumenstein, and V. Estivill-Castro, “What Effect Does an Animal Robot Called CuDDler Have on the Engagement and Emotional Response of Older People with Dementia? A Pilot Feasibility Study,” Int. J. of Social Robotics, Vol.8, No.1, pp. 145-156, 2016.
-  O. Sugiyama, K. Itoyama, K. Nakadai, and H. G. Okuno, “Sound annotation tool for multidirectional sounds based on spatial information extracted by HARK robot audition software,” 2014 IEEE Int. Conf. on Systems, Man, and Cybernetics (SMC), 2014.
-  M. Ohkita, Y. Bando, Y. Ikemiya, K. Itoyama, and K. Yoshii, “Audio-visual beat tracking based on a state-space model for a music robot dancing with humans,” 2015 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2015.
-  K. Nakadai, T. Mizumoto, and K. Nakamura, “Robot-Audition-based Human-Machine Interface for a Car,” 2015 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2015.
-  I. Nishimuta, N. Hirayama, K. Yoshii, K. Itoyama, and H. G. Okuno, “A robot quizmaster that can localize, separate, and recognize simultaneous utterances for a fastest-voice-first quiz game,” 2014 IEEE-RAS Int. Conf. on Humanoid Robots, 2014.
-  M. Otake, M. Nergui, S. Moon, K. Takagi, T. Kamashima, and K. Nakadai, “Development of a sound source localization system for assisting group conversation,” Int. Conf. on Intelligent Robotics and Applications, 2013.
-  R. Gomez, K. Nakamura, T. Mizumoto, and K. Nakadai, “Compensating changes in speaker position for improved voice-based human-robot communication,” 2015 IEEE-RAS 15th Int. Conf. on Humanoid Robots (Humanoids), 2015.
-  F. Asano, M. Goto, K. Itou, and H. Asoh, “Real-time sound source localization and separation system and its application to automatic speech recognition,” Proc. of EUROSPEECH 2001, pp. 1013-1016, 2001.
-  R. Schmidt, “Multiple emitter location and signal parameter estimation,” IEEE Trans. on Antennas and Propagation, Vol.34, No.3, pp. 276-280, 1986.
-  H. Nakajima, K. Nakadai, and Y. Hasegawa, “Blind source separation with parameter-free adaptive step-size method for robot audition,” IEEE Trans. on Audio, Speech, and Language Processing, Vol.18, No.6, pp. 1476-1485, 2010.
-  H. G. Okuno, K. Nakadai, and H. Kim, “Robot audition: Missing feature theory approach and active audition,” Robotics research, pp. 227-244, 2011.
-  P. Danès and J. Bonnal, “Information-theoretic detection of broadband sources in a coherent beamspace MUSIC scheme,” 2010 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), 2010.
-  A. Lindsey, “Emobie™: A robot companion for children with anxiety,” The Eleventh ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI), 2016.
-  J. K. Westlund et al., “Tega: A social robot,” The Eleventh ACM/IEEE Int. Conf. on Human-Robot Interaction (HRI), 2016.
-  E. Kubinyi et al., “Social behaviour of dogs encountering AIBO, an animal-like robot in a neutral and in a feeding situation,” Behavioural Processes, Vol.65, No.3, pp. 231-239, 2004.
-  M. Zhao and A. P. del Pobil, “Is a furry pet more engaging? Comparing the effect of the material on the body surface of robot pets,” Proc. of the 5th Int. Conf. on Social Robotics (ICSR 2013), pp. 569-570, Bristol, UK, 2013.
-  S. Jeong et al., “Designing a socially assistive robot for pediatric care,” Proc. of the 14th Int. Conf. on Interaction Design and Children, 2015.
-  A. Lazar, H. J. Thompson, A. M. Piper, and G. Demiris, “Rethinking the design of robotic pets for older adults,” Proc. of the 2016 ACM Conf. on Designing Interactive Systems, 2016.
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.