
JACIII Vol.20 No.1 pp. 124-131 (2016)
doi: 10.20965/jaciii.2016.p0124

Paper:

Design and Development of an Artificial Intelligent System for Audio-Visual Cancer Breast Self-Examination

Robert Kerwin C. Billones, Elmer P. Dadios, and Edwin Sybingco

De La Salle University
2401 Taft Avenue, Manila 0922, Philippines

Received: June 6, 2015
Accepted: September 29, 2015
Online released: January 19, 2016
Published: January 20, 2016

Keywords: artificial intelligent system, intelligent operating architecture, computer vision, speech processing, breast self-examination
Abstract
This paper presents the development of a computer system for breast cancer awareness and education, particularly in the proper performance of breast self-examination (BSE). It covers the design and development of an artificial intelligent system (AIS) for audio-visual BSE that is capable of computer vision (CV), speech recognition (SR), speech synthesis (SS), and audio-visual (AV) feedback response. The AIS is named BEA, an acronym for Breast Examination Assistant, and acts as a virtual health care assistant that guides a female user in performing proper BSE. BEA is composed of four interdependent modules: perception, memory, intelligence, and execution. Collectively, these modules form the intelligent operating architecture (IOA) that runs the BEA system. The development methods for the individual subsystems (CV, SR, SS, and AV feedback), together with their intelligent integration, are discussed in the methodology section. Finally, the authors present the results of tests performed on the system.
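
The four-module organization described above (perception, memory, intelligence, and execution coordinated by a single intelligent operating architecture) can be pictured with a minimal Python sketch. All class, method, and variable names below are illustrative assumptions for exposition only; they are not the authors' BEA implementation.

# Illustrative sketch of a four-module intelligent operating architecture (IOA).
# Hypothetical names throughout; not the paper's actual BEA code.

from dataclasses import dataclass, field


@dataclass
class Perception:
    """Gathers inputs: camera frames (CV) and microphone audio (SR)."""

    def sense(self) -> dict:
        # Placeholder for frame capture and speech-recognition output.
        return {"frame": None, "utterance": ""}


@dataclass
class Memory:
    """Stores session state, e.g., which BSE step the user is on."""

    state: dict = field(default_factory=lambda: {"bse_step": 0})

    def update(self, observation: dict) -> None:
        self.state["last_observation"] = observation


@dataclass
class Intelligence:
    """Decides the next instruction from the observation and stored state."""

    def decide(self, observation: dict, memory: Memory) -> str:
        step = memory.state["bse_step"]
        # A real system would classify hand position/pressure here.
        return f"Proceed with BSE step {step + 1}"


@dataclass
class Execution:
    """Delivers audio-visual feedback (speech synthesis plus on-screen cues)."""

    def act(self, instruction: str) -> None:
        print(f"[BEA says] {instruction}")


def run_ioa_cycle(p: Perception, m: Memory, i: Intelligence, e: Execution) -> None:
    """One perceive -> remember -> decide -> act cycle of the IOA."""
    observation = p.sense()
    m.update(observation)
    instruction = i.decide(observation, m)
    e.act(instruction)


if __name__ == "__main__":
    run_ioa_cycle(Perception(), Memory(), Intelligence(), Execution())

In a full system, the cycle above would run continuously, with the perception module feeding CV and SR results to the intelligence module and the execution module returning synthesized speech and visual guidance to the user.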
Cite this article as:
R. Billones, E. Dadios, and E. Sybingco, “Design and Development of an Artificial Intelligent System for Audio-Visual Cancer Breast Self-Examination,” J. Adv. Comput. Intell. Intell. Inform., Vol.20 No.1, pp. 124-131, 2016.
