JACIII Vol.16 No.2 pp. 313-326
doi: 10.20965/jaciii.2012.p0313


Impacts of Multimodal Feedback on Efficiency of Proactive Information Retrieval from Task-Related HRI

Barbara Gonsior*, Christian Landsiedel*,
Nicole Mirnig**, Stefan Sosnowski*, Ewald Strasser**,
Jakub Złotowski**, Martin Buss*, Kolja Kühnlenz*,
Manfred Tscheligi**, Astrid Weiss**, and Dirk Wollherr*

*Institute of Automatic Control Engineering (LSR), Technische Universität München, D-80290 Munich, Germany

**ICT&S Center, University of Salzburg, Sigmund-Haffner-Gasse 18, 5020 Salzburg, Austria

Received: September 15, 2011; Accepted: November 15, 2011; Published: March 20, 2012
Keywords: robotics, human-robot interaction, emotions, facial expressions, usability
This work is a first step towards an integration of multimodality, with the aim of making efficient use of both human-like and non-human-like feedback modalities in order to optimize proactive information retrieval from task-related Human-Robot Interaction (HRI) in human environments. The presented approach combines the human-like modalities of speech and emotional facial mimicry with non-human-like modalities. The proposed non-human-like modalities are a screen displaying the robot's retrieved knowledge to the human, and a pointer mounted above the robot's head for indicating directions and referring to objects in shared visual space, serving as an equivalent to arm and hand gestures. First, pre-interaction feedback is explored in an experiment investigating different approach behaviors, in order to find socially acceptable trajectories that increase the success of interactions and thus the efficiency of information retrieval. Second, pre-evaluated human-like modalities are introduced. First results of a multimodal feedback study are presented in the context of the IURO project,1 in which a robot asks for directions to a predefined goal location.
1. Interactive Urban Robot.
Cite this article as:
B. Gonsior, C. Landsiedel, N. Mirnig, S. Sosnowski, E. Strasser, J. Złotowski, M. Buss, K. Kühnlenz, M. Tscheligi, A. Weiss, and D. Wollherr, “Impacts of Multimodal Feedback on Efficiency of Proactive Information Retrieval from Task-Related HRI,” J. Adv. Comput. Intell. Intell. Inform., Vol.16 No.2, pp. 313-326, 2012.
