
JACIII Vol.16 No.2 pp. 256-265
doi: 10.20965/jaciii.2012.p0256
(2012)

Paper:

Evaluation of Operetta Songs Generation System Based on Impressions of Story Scenes

Kenkichi Ishizuka* and Takehisa Onisawa**

*Graduate School of Systems and Information Engineering, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8573, Japan

**Faculty of Engineering, Information and Systems, University of Tsukuba, 1-1-1 Tennodai, Tsukuba 305-8573, Japan

Received:
September 1, 2011
Accepted:
December 23, 2011
Published:
March 20, 2012
Keywords:
Kansei information, multimedia, music, story
Abstract
This paper describes a system that composes operetta songs matching adjectives representing a producer's impressions of story scenes. The inputs to the system are original theme music, story texts, and adjectives representing the producer's impressions of the story scenes. Using Kansei information processing, the system composes variations on the theme music and generates lyrics based on the impressions of the story scenes, so as to convey the producer's impressions of the story to audiences. Evolutionary computation is applied to the generation of both the variations and the lyrics. Subject experiments using The Ant and the Chrysalis from Aesop's Fables as the story are performed to verify the usefulness of the system. The experiments consider two types of evaluation: whether the system generates operetta songs that fit the story scenes appropriately, and whether the generated operetta songs give producers and listeners the same impressions.
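
To illustrate how evolutionary computation can drive this kind of generate-and-evaluate loop, the following is a minimal sketch, not the authors' implementation: a toy genetic algorithm that evolves variations on a theme melody toward target impression adjectives. The note range, the adjective set, the feature-based impression() mapping, and all parameter values are hypothetical stand-ins for the paper's Kansei model.

import random

NOTES = list(range(60, 73))              # MIDI pitches C4..C5, an assumed range
TARGET = {"bright": 0.8, "lively": 0.7}  # producer's impression adjectives (assumed 0..1 scale)

def impression(melody):
    """Hypothetical Kansei mapping: score a melody on each adjective
    from crude surface features (mean pitch, note-to-note movement)."""
    mean_pitch = sum(melody) / len(melody)
    movement = sum(abs(a - b) for a, b in zip(melody, melody[1:])) / (len(melody) - 1)
    return {
        "bright": (mean_pitch - min(NOTES)) / (max(NOTES) - min(NOTES)),
        "lively": min(movement / 6.0, 1.0),
    }

def fitness(melody):
    """Closeness of the melody's impressions to the target adjectives."""
    imp = impression(melody)
    return -sum((imp[k] - v) ** 2 for k, v in TARGET.items())

def mutate(melody, rate=0.2):
    """Randomly replace notes, keeping the melody length fixed."""
    return [random.choice(NOTES) if random.random() < rate else n for n in melody]

def crossover(a, b):
    """Single-point crossover of two equal-length melodies."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(theme, pop_size=30, generations=50):
    """Evolve variations on the theme toward the target impressions."""
    population = [mutate(theme, rate=0.5) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elite = population[: pop_size // 3]
        population = elite + [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(pop_size - len(elite))
        ]
    return max(population, key=fitness)

theme = [60, 62, 64, 65, 67, 65, 64, 62]  # a toy theme melody
best = evolve(theme)
print("variation:", best, "impressions:", impression(best))

In the paper's setting, the fitness evaluation would instead compare the impressions of a candidate variation or lyric against the producer's input adjectives; the toy feature functions above stand in for that evaluation only for the sake of a runnable example.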
Cite this article as:
K. Ishizuka and T. Onisawa, “Evaluation of Operetta Songs Generation System Based on Impressions of Story Scenes,” J. Adv. Comput. Intell. Intell. Inform., Vol.16 No.2, pp. 256-265, 2012.
