
JACIII Vol.16 No.2 pp. 227-238 (2012)
doi: 10.20965/jaciii.2012.p0227

Paper:

Musical Expression Generation Reflecting User’s Impression by Kansei Space and Fuzzy Rules

Mio Suzuki and Takehisa Onisawa

Graduate School of Systems and Information Engineering, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8573, Japan

Received: September 7, 2011
Accepted: December 8, 2011
Published: March 20, 2012
Keywords: musical expression, impression, adjective, Kansei space, fuzzy inference
Abstract
This paper proposes a method of generating musical expression based on a performer’s impression. The method consists of two procedures: image estimation and derivation of musical-expression parameter values. In the image estimation procedure, an adjective, i.e., an image word, is mapped into the Kansei space. In the parameter derivation procedure, the values of the musical-expression parameters, namely tempo, volume, and note length, are obtained by mapping from the Kansei space to the parameter space by fuzzy inference. The validity of the proposed method and the influence of music genres on musical expression generation are confirmed by experiments with human subjects. The experimental results show that the proposed method successfully generates musical expression that reflects impressions, as well as musical expression in several genres.
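To make the two-step mapping concrete, the following is a minimal Python sketch of the second procedure: simplified fuzzy inference from a point in a two-dimensional Kansei space to the three musical-expression parameters (tempo, volume, note length). The axis ranges, membership functions, rule base, and all numeric consequents are illustrative assumptions, not the paper’s actual definitions, and the image-estimation step (adjective → Kansei-space point) is replaced here by a hand-picked point.

```python
# Sketch of the parameter-derivation procedure: fuzzy rules map a point in a
# (hypothetical) 2-D Kansei space, each axis in [-1, 1], to musical-expression
# parameters. All labels, rules, and consequent values below are illustrative
# assumptions, not the paper's actual rule base.

def tri(x, a, b, c):
    """Triangular membership function with peak at b (shoulders allowed)."""
    if x < a or x > c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Fuzzy labels on each Kansei axis.
LOW  = lambda x: tri(x, -1.0, -1.0, 0.0)   # left shoulder
MID  = lambda x: tri(x, -1.0,  0.0, 1.0)
HIGH = lambda x: tri(x,  0.0,  1.0, 1.0)   # right shoulder

# Rule base: (label on axis 1, label on axis 2) -> crisp consequents
# (tempo in BPM, MIDI volume 0-127, note-length ratio 0-1).
RULES = [
    (HIGH, HIGH, (160, 110, 0.6)),   # e.g., a "lively" region: fast, loud, short notes
    (HIGH, LOW,  (140,  80, 0.8)),
    (MID,  MID,  (110,  90, 0.9)),
    (LOW,  HIGH, ( 90, 100, 1.0)),
    (LOW,  LOW,  ( 70,  60, 1.0)),   # e.g., a "calm" region: slow, soft, full-length notes
]

def infer(k1, k2):
    """Simplified fuzzy inference: min t-norm, weighted-average defuzzification."""
    num, den = [0.0, 0.0, 0.0], 0.0
    for m1, m2, consequents in RULES:
        w = min(m1(k1), m2(k2))              # firing strength of the rule
        den += w
        for i, c in enumerate(consequents):
            num[i] += w * c
    if den == 0.0:
        raise ValueError("no rule fires at this Kansei-space point")
    return tuple(n / den for n in num)

# The image-estimation step would first place an adjective at a point in the
# Kansei space; here a point is simply chosen by hand for illustration.
tempo, volume, length = infer(0.7, 0.5)
print(f"tempo = {tempo:.0f} BPM, volume = {volume:.0f}, note length = {length:.2f}")
```

A Mamdani-style rule base with min t-norm and weighted-average defuzzification is one common way to realize such a Kansei-to-parameter mapping; the paper’s exact membership functions and inference scheme may differ.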
Cite this article as:
M. Suzuki and T. Onisawa, “Musical Expression Generation Reflecting User’s Impression by Kansei Space and Fuzzy Rules,” J. Adv. Comput. Intell. Intell. Inform., Vol.16 No.2, pp. 227-238, 2012.
