
IJAT Vol.13 No.6, pp. 803-809, 2019
doi: 10.20965/ijat.2019.p0803

Paper:

Recognition of Transient Environmental Sounds Based on Temporal and Frequency Features

Shota Okubo, Zhihao Gong, Kento Fujita, and Ken Sasaki

The University of Tokyo
5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8563, Japan

Corresponding author

Received:
June 23, 2019
Accepted:
September 4, 2019
Published:
November 5, 2019
Keywords:
environmental sound recognition, transient sound, spectrogram, acoustic feature
Abstract

Environmental sound recognition (ESR) refers to the recognition of all sounds other than the human voice or musical sounds. Typical ESR methods utilize spectral information and its variation over time. However, in the case of transient sounds, spectral information is insufficient because a spectrum captures only the average characteristics of the signal over a time window. In this study, the waveforms of sound signals and their spectra were analyzed visually to extract the temporal characteristics of the sounds more directly. Based on these observations, features such as the initial rise time, duration, and smoothness of the sound signal; the distribution and smoothness of the spectrum; the clarity of the sustained sound components; and the number and interval of collisions in chattering were proposed. Feature values were obtained experimentally for eight transient environmental sounds, and the distributions of the values were evaluated. A recognition experiment was then conducted on 11 transient sounds, with Mel-frequency cepstral coefficients (MFCCs) as the reference feature and a support vector machine (SVM) as the classification algorithm. The recognition rates obtained with the MFCCs were below 50% for five of the 11 sounds, and the overall recognition rate was 69%. In contrast, the recognition rates obtained with the proposed features were above 50% for all sounds, and the overall rate was 86%.
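The pipeline outlined in the abstract (hand-crafted temporal features fed to an SVM) can be sketched in a few lines of Python. The sketch below is a minimal, hypothetical illustration rather than the authors' implementation: the concrete definitions used here for rise time (onset to envelope peak), duration (time above a fixed envelope threshold), and smoothness (mean absolute envelope slope), as well as the frame size and threshold values, are assumptions for demonstration only.

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def amplitude_envelope(x, frame=256):
    # Frame-wise peak amplitude as a coarse envelope of the waveform.
    n = len(x) // frame
    return np.abs(x[:n * frame]).reshape(n, frame).max(axis=1)

def temporal_features(x, sr, frame=256, thresh=0.1):
    # Rise time, duration, and envelope smoothness (illustrative definitions,
    # not the paper's exact formulas).
    env = amplitude_envelope(x, frame)
    env = env / (env.max() + 1e-12)            # normalize to [0, 1]
    dt = frame / sr                             # seconds per envelope sample
    onset = int(np.argmax(env > thresh))        # first frame above threshold
    peak = int(np.argmax(env))                  # frame of maximum amplitude
    rise_time = max(peak - onset, 0) * dt
    duration = int(np.count_nonzero(env > thresh)) * dt
    smoothness = float(np.mean(np.abs(np.diff(env))))  # smaller = smoother envelope
    return np.array([rise_time, duration, smoothness])

def fit_classifier(clips, labels):
    # clips: list of (waveform array, sample rate) pairs; labels: class names.
    feats = np.vstack([temporal_features(x, sr) for x, sr in clips])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(feats, labels)
    return clf

A full implementation would add the frequency-domain features described in the paper (distribution and smoothness of the spectrum, clarity of sustained components, chattering statistics) to the per-clip feature vector, but the overall structure, fixed-length feature vectors classified by a standard SVM, is the same.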

Cite this article as:
S. Okubo, Z. Gong, K. Fujita, and K. Sasaki, “Recognition of Transient Environmental Sounds Based on Temporal and Frequency Features,” Int. J. Automation Technol., Vol.13 No.6, pp. 803-809, 2019.

