JRM Vol.29 No.1 pp. 137-145
doi: 10.20965/jrm.2017.p0137


Wayang Robot with Gamelan Music Pattern Recognition

Tito Pradhono Tomo*, Alexander Schmitz*, Guillermo Enriquez**, Shuji Hashimoto**, and Shigeki Sugano*

*Department of Modern Mechanical Engineering, School of Creative Science and Engineering, Waseda University
3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan

**Department of Applied Physics, School of Advanced Science and Engineering, Waseda University
3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan

Received: August 7, 2016
Accepted: October 17, 2016
Published: February 20, 2017

Keywords: music information retrieval, machine learning, intelligent machine, wayang kulit, intangible culture


[Figure: Wayang robot]

This paper proposes a way to help preserve wayang puppet theater, an endangered intangible cultural heritage of Indonesia, by developing a robot that can take on the role of the puppeteer. We developed a seven degrees-of-freedom (DOF) manipulator that actuates the sticks attached to the body and hands of a wayang puppet, allowing the robot to imitate eight distinct manipulations performed by a human puppeteer. Furthermore, we developed gamelan music pattern recognition as a step toward a robot that can perform in response to gamelan music. In an offline experiment, we extracted the short-time energy (time domain), spectral rolloff, 13 Mel-frequency cepstral coefficients (MFCCs), and the harmonic ratio from 5 s long clips, every 0.025 s, with a window length of 1 s, for a total of 2576 features per clip. Two classifiers, a three-layer feed-forward neural network (FNN) and a multi-class support vector machine (SVM), were compared. The SVM outperformed the FNN, identifying the three different gamelan music patterns with a recognition rate of 96.4%.
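The feature framing described above works out as follows: a 5 s clip windowed with a 1 s window every 0.025 s yields (5 − 1)/0.025 + 1 = 161 windows, and 16 features per window (energy, rolloff, 13 MFCCs, harmonic ratio) gives the stated 161 × 16 = 2576 features. The sketch below illustrates this pipeline under several assumptions not stated in the abstract: an 8 kHz sample rate, synthetic tones standing in for gamelan recordings, only two of the 16 per-frame features (energy and 90% spectral rolloff; MFCCs and harmonic ratio are omitted for brevity), and scikit-learn's `SVC` as the multi-class SVM (one-vs-one by default).

```python
import numpy as np
from sklearn.svm import SVC

SR = 8000                         # sample rate (assumed; not given in the abstract)
CLIP_S, WIN_S, HOP_S = 5.0, 1.0, 0.025   # framing parameters from the paper

def frame_features(x, sr=SR):
    """Per-window short-time energy and 90% spectral rolloff, concatenated.

    With a 5 s clip this yields 161 windows; the paper's full 16-feature
    set per window would give 161 * 16 = 2576 features (here: 161 * 2 = 322).
    """
    win, hop = int(WIN_S * sr), int(HOP_S * sr)
    feats = []
    for start in range(0, len(x) - win + 1, hop):
        frame = x[start:start + win]
        energy = np.sum(frame ** 2) / win              # mean power of the window
        spec = np.abs(np.fft.rfft(frame))              # magnitude spectrum
        cum = np.cumsum(spec)
        rolloff = np.searchsorted(cum, 0.9 * cum[-1]) / len(spec)
        feats.extend([energy, rolloff])
    return np.array(feats)

def synth_clip(f0, rng):
    """Toy stand-in for a gamelan recording: a noisy tone at f0 Hz."""
    t = np.arange(int(CLIP_S * SR)) / SR
    return np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)

rng = np.random.default_rng(0)
X, y = [], []
for label, f0 in enumerate([110.0, 220.0, 440.0]):     # three mock "patterns"
    for _ in range(10):
        X.append(frame_features(synth_clip(f0, rng)))
        y.append(label)
X, y = np.array(X), np.array(y)

# Multi-class SVM (scikit-learn trains one-vs-one binary SVMs internally).
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(X.shape)            # (30, 322): 30 clips, 161 windows x 2 features each
```

This is only a structural sketch of the offline experiment; reproducing the reported 96.4% would require the actual gamelan recordings and the full 16-feature set.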

