
JRM Vol.20 No.5 pp. 731-738
doi: 10.20965/jrm.2008.p0731
(2008)

Paper:

Control of Human Generating Force by Use of Acoustic Information – Substituting Artificial Sounds for Onomatopoeic Utterances

Miki Iimura*, Taichi Sato*, and Kihachiro Tanaka**

*School of Engineering, Tokyo Denki University, 2-2 Kanda-Nishiki-cho, Chiyoda-ku, Tokyo 101-8457, Japan

**Faculty of Engineering, Saitama University, 255, Shimo-okubo, Sakura-ku, Saitama-shi, Saitama 338-8570, Japan

Received:
February 25, 2008
Accepted:
July 18, 2008
Published:
October 20, 2008
Keywords:
onomatopoeia, artificial sound, human sensitivity, lifting action, emotion
Abstract
We have conducted basic experiments on applying onomatopoeia to engineering problems. Subjects performed lifting tasks while listening to acoustic information consisting of onomatopoeic utterances and artificial sounds, and we demonstrated a relationship between the acoustic information and the lifting force the subjects exerted. Here, we replace onomatopoeic utterances with artificial sounds related to onomatopoeic utterances "with or without emotion," and show that onomatopoeic utterances "with emotion" can indeed be replaced by composite sounds, including sweep sounds with a high center frequency. We also show that onomatopoeic utterances "without emotion" and pure sounds have the same effect on the magnitude of the lifting force.
Cite this article as:
M. Iimura, T. Sato, and K. Tanaka, "Control of Human Generating Force by Use of Acoustic Information – Substituting Artificial Sounds for Onomatopoeic Utterances," J. Robot. Mechatron., Vol.20 No.5, pp. 731-738, 2008.
References
  [1] T. Sato, K. Oyama, M. Iimura, H. Kobayashi, and K. Tanaka, "Control of human generating force by use of acoustic information – Utilization of onomatopoeic utterance," JSME Int. Journal, Series C, Vol.49, No.3, pp. 687-694, 2006.
  [2] Y. Fujino, K. Inoue, M. Kikkawa, S. Horie, E. Nishina, T. Yamada, and Y. Sagisaka, "Actual condition survey of the usage of the sport onomatopoeias in Japanese," Tokai J. of Sports Medical Science, No.17, pp. 28-38, 2005 (in Japanese).
  [3] K. Tohya, "Effects of verbalization of mimetic phoneme on a motor memory in infants," J. of Education Psychology, Vol.40, No.2, pp. 148-156, 1992 (in Japanese).
  [4] K. Tohya, "Developmental changes in effect of verbalization strategy of onomatopoeia on the motor-memory," J. of Education Psychology, Vol.40, No.4, pp. 436-444, 1992 (in Japanese).
  [5] K. Tohya, "The role of onomatopoeia in rehabilitation for the child with disability – A discussion based on the analysis of the training process in SHINRI-REHABILITATION," Bulletin of Joetsu University of Education, Vol.12, No.2, pp. 269-277, 1993 (in Japanese).
  [6] S. Murata, "Encyclopedia of the sounds," Maruzen-sha, pp. 46-47, 94, 178-181, 729-731, 799-803, 2006 (in Japanese).
  [7] I. Tamori, "Enjoy Onomatopoeia," Iwanami-shoten, pp. 134-151, 2004 (in Japanese).
  [8] T. Toi, "Creation of Tone Quality for Automobile and Technical Trend of Comfortable Sound Design," Journal of Society of Automotive Engineers of Japan, Vol.60, No.4, pp. 12-17, 2006 (in Japanese).
  [9] O. Kuroda and Y. Fujii, "Study of Improving Engine Sound Quality," Journal of Society of Automotive Engineers of Japan, Vol.43, No.8, pp. 46-52, 1989 (in Japanese).
  [10] Y. Ishii and K. Noumura, "Evaluation Techniques and their Applications for Interior Sound Quality of Vehicle," Journal of Society of Automotive Engineers of Japan, Vol.60, No.4, pp. 24-29, 2006 (in Japanese).
  [11] A. Matsumura, "Daijisen," Shogaku-kan, p. 2029, 1998 (in Japanese).
  [12] T. Shirasawa, T. Yamamura, T. Tanaka, and N. Ohnishi, "Discriminating Emotions intended in Speech," Technical report of IEICE HIP, Vol.96, No.499, pp. 79-84, 1997 (in Japanese).
  [13] J. Sato and S. Morishima, "Dimensional Analysis of Emotional Speech in terms of Semantic Differential Technique," Technical report of IEICE, HCS 97-4, pp. 21-28, 1997 (in Japanese).
  [14] M. Shigenaga, "Features of Emotionally Uttered Speech Revealed by Discriminant Analysis," The Transactions of the Institute of Electronics, Information and Communication Engineers A, Vol.J83-A, No.6, pp. 726-735, 2000 (in Japanese).
  [15] H. Mitsumoto, M. Yanagida, H. Otawa, and S. Tamura, "Detection of Ironical Utterance Based on Acoustic Features," IEICE technical report, Speech, Vol.96, No.160, pp. 17-24, 1996 (in Japanese).
  [16] T. Komatsu, "Can we assign attitudes to a computer based on its beep sounds? – Toward an effective method for making humans empathize with artificial agents," IJCAI-05 Proc., pp. 1692-1693, 2005.
  [17] S. Inokuchi, "Kansei Information Processing," The Journal of the Institute of Electronics, Information, and Communication Engineers, Vol.80, No.10, pp. 1007-1012, 1997 (in Japanese).
  [18] Y. Kitahara, "Kansei Information and Media Processing Technology," The Journal of the Institute of Electronics, Information, and Communication Engineers, Vol.81, No.1, pp. 60-67, 1998 (in Japanese).
  [19] S. Sugano, "Possibility that robots have hearts," Journal of the Society of Instrument and Control Engineers, Vol.34, No.4, pp. 320-323, 1995 (in Japanese).
  [20] H. Katayose and S. Inokuchi, "Virtual Performer," Journal of the Robotics Society of Japan, Vol.14, No.2, pp. 208-211, 1996 (in Japanese).
  [21] T. Sato and K. Tanaka, "Sensitivity of Human for Sounds and Words, and Their Engineering Use," Journal of the Robotics Society of Japan, Vol.24, No.6, pp. 716-719, 2006 (in Japanese).
