Daisuke Katagami, Ken Ogawa, and Katsumi Nitta
We propose adaptive gestures for robots that adapt to groups based on utterance situation and social position, and we study how the type and number of taught gestures differ between groups for equivalent utterances. We collect gesture data in which group members directly teach gestures to a robot, together with the utterance situations and social positions, and use it to build group-specific adaptation rule sets. These rule sets capture gestures that function within the group and thus become common knowledge shared by its members. Experiments confirm the goodness of fit of each group's rule set: when gestures are generated from new text given as input, the degree of gesture adaptation is higher within the group from which the rules were learned.
Keywords: group adaptation, social position, adaptive gesture