JACIII Vol.17 No.2 pp. 227-236
doi: 10.20965/jaciii.2013.p0227
(2013)

Paper:

Editing Robot Motion Using Phonemic Feature of Onomatopoeias

Junki Ito*1, Masayoshi Kanoh*2, Tsuyoshi Nakamura*3,
and Takanori Komatsu*4

*1Graduate School of Computer and Cognitive Sciences, Chukyo University, 101 Tokodachi, Kaizu-cho, Toyota, Aichi 470-0393, Japan

*2School of Information Science and Technology, Chukyo University, 101 Tokodachi, Kaizu-cho, Toyota, Aichi 470-0393, Japan

*3Graduate School of Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, Aichi 466-8555, Japan

*4Faculty of Textile Science and Technology, Shinshu University, 3-15-1 Tokida, Ueda, Nagano 386-8567, Japan

Received:
November 15, 2012
Accepted:
January 30, 2013
Published:
March 20, 2013
Keywords:
onomatopoeia, P-type Fourier descriptor, auto-associative neural network
Abstract
Onomatopoeias are words that represent the sound, appearance, or voice of things, thus making it possible to create expressions that bring a scene to life in a subtle fashion. Onomatopoeias can be used to make the process of robot motion generation easier and more intuitive. In previous studies, subjective quantified values of onomatopoeias have been used as indices of robot motion, but the generality of the resulting motion has not been evaluated. In this study, we propose a method for generating robot motion using the objective quantified values of onomatopoeias. We experimentally verified that the proposed method generated more suitable motion than previous methods did.
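To make the approach named in the keywords more concrete, the following is a minimal NumPy sketch of an auto-associative neural network (autoencoder) that compresses phonemic feature vectors of onomatopoeias into a low-dimensional space whose coordinates could index motion parameters. The feature encoding, network sizes, and example words below are illustrative assumptions for this sketch, not the paper's actual design.

```python
# Hypothetical sketch: an auto-associative network compresses phonemic
# feature vectors of onomatopoeias into a 2D "editing space" for motion.
# Features, words, and layer sizes are assumptions, not the authors' data.
import numpy as np

rng = np.random.default_rng(0)

# Assumed phonemic feature vectors (e.g., voicing, vowel quality,
# consonant hardness, repetition) for a few Japanese onomatopoeias.
features = {
    "sara-sara": [0.1, 0.8, 0.2, 1.0],   # smooth, light
    "zara-zara": [0.9, 0.8, 0.7, 1.0],   # rough (voiced 'z')
    "pyon-pyon": [0.2, 0.3, 0.4, 1.0],   # bouncy, repeated
    "doshin":    [1.0, 0.2, 0.9, 0.0],   # heavy, single impact
}
X = np.array(list(features.values()))

# Auto-associative network: 4 -> 2 -> 4, trained to reproduce its input.
# The 2-unit bottleneck serves as the low-dimensional editing space.
n_in, n_hid = X.shape[1], 2
W1 = rng.normal(0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_hid, n_in)); b2 = np.zeros(n_in)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

lr = 0.5
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)          # bottleneck activations
    y = sigmoid(h @ W2 + b2)          # reconstruction of the input
    # Backpropagate the squared reconstruction error.
    dy = (y - X) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ dy;  b2 -= lr * dy.sum(axis=0)
    W1 -= lr * X.T @ dh;  b1 -= lr * dh.sum(axis=0)

# Each onomatopoeia now has 2D coordinates; in a scheme like the one
# proposed, nearby words would map to similar motion-parameter settings.
for word, x in features.items():
    z = sigmoid(np.array(x) @ W1 + b1)
    print(f"{word:>10}: ({z[0]:.2f}, {z[1]:.2f})")
```

In this kind of setup, editing a motion amounts to moving a point in the bottleneck space rather than tuning joint trajectories directly, which is what makes an objective quantification of onomatopoeias usable as a motion index.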
Cite this article as:
J. Ito, M. Kanoh, T. Nakamura, and T. Komatsu, “Editing Robot Motion Using Phonemic Feature of Onomatopoeias,” J. Adv. Comput. Intell. Intell. Inform., Vol.17 No.2, pp. 227-236, 2013.
References
[1] J. Ito, R. Arisawa, M. Kanoh, T. Nakamura, and T. Komatsu, “Operation Generation of the Intuitive Robot by the Operation Plane Using the Neural Network,” Annual Conf. of the Japanese Society for Artificial Intelligence, 2012 (in Japanese).
[2] T. Komatsu and H. Akiyama, “Expression System of Onomatopoeias for Assisting Users’ Intuitive Expressions,” The Trans. of the Institute of Electronics, Information and Communication Engineers, Vol.J92-A, No.11, pp. 752-763, 2009 (in Japanese).
[3] T. Komatsu, “Quantifying Japanese Onomatopoeias: Toward Augmenting Creative Activities with Onomatopoeias,” Augmented Human Int. Conf., 2012.
[4] K. Kanbara and K. Tsukada, “Onomatopen: Painting Using Onomatopoeia,” Int. Conf. on Entertainment Computing, pp. 43-54, 2010.
[5] H. Terashima and T. Komatsu, “MOYA-MOYA Drawing: Development of a Drawing Tool That Can Utilize Users’ Expressed Onomatopoeias as a Drawing Effect,” Annual Conf. of the Japanese Society for Artificial Intelligence, 2012 (in Japanese).
[6] Y. Ueda, Y. Shimizu, and M. Sakamoto, “System Construction Supporting Medical Interviews with Foreign Doctors Using Onomatopoeia Expressing Pains,” Int. Workshop on Modern Science and Technology, pp. 132-136, 2012.
[7] K. Lertsumruaypun, C. Watanabe, and S. Nakamura, “Onomatoperori: Recipe Recommendation System Using Onomatopoeic Words,” IEICE 2nd technical committee document, 2010.
[8] Y. Uesaka, “Spectral Analysis and Complexity of Form,” Symposium on Applied Functional Analysis, pp. 18-29, 1985.
[9] Y. Uesaka, “Spectral Analysis of Form Based on Fourier Descriptors,” Int. Symposium for Science on Form, pp. 405-412, 1986.
[10] M. Kanoh, S. Kato, and H. Itoh, “Efficient Joint Detection Considering Complexity of Contours,” Lecture Notes in Artificial Intelligence, Vol.1886, pp. 588-598, 2000.
[11] C. M. Bishop, “Neural Networks for Pattern Recognition,” Oxford University Press, 1995.
[12] F. Kawakami, S. Morishima, H. Yamada, and H. Harashima, “Construction and Psychological Evaluation of 3-D Emotion Space,” Biomedical Fuzzy and Human Sciences (J. of the Biomedical Fuzzy Systems Association), Vol.1, No.1, pp. 33-42, 1995.
[13] M. Kanoh, S. Iwata, S. Kato, and H. Itoh, “Emotive Facial Expressions of Sensitivity Communication Robot “Ifbot”,” Kansei Engineering Int., Vol.5, No.3, pp. 35-42, 2005.
[14] M. Kanoh, T. Nakamura, S. Kato, and H. Itoh, “Affective Facial Expressions Using Auto-associative Neural Network in Kansei Robot “Ifbot”,” in Y. Dai et al. (Eds.), Kansei Engineering and Soft Computing: Theory and Practice, pp. 215-236, IGI Global, 2010.
[15] N. Ueki, S. Morishima, H. Yamada, and H. Harashima, “Expression Analysis/Synthesis System Based on Emotion Space Constructed by Multilayered Neural Network,” Systems and Computers in Japan, Vol.25, No.13, pp. 95-107, 1995.
