Multiscale Image Aggregation for Dental Radiograph Segmentation
Martin Leonard Tangel*, Chastine Fatichah*,
Muhammad Rahmat Widyanto**, Fangyan Dong*,
and Kaoru Hirota*
*Department of Computational Intelligence and Systems Science, Tokyo Institute of Technology, G3-49, 4259 Nagatsuta, Midori-ku, Yokohama 226-8502, Japan
**Faculty of Computer Science, University of Indonesia, Depok Campus, Depok 16424, West Java, Indonesia
Multiscale Image Aggregation (MIA) is proposed for dental radiograph segmentation, where a grayscale image segmentation method using neighborhood pixel evaluation and fuzzy inference is applied to the original image and to three scaled-down versions of it. The average segmentation accuracy of the proposed method exceeds that of the Otsu method, and the method is robust against the inconsistent contrast, uneven exposure, and pixel noise typical of radiographs. An experiment is performed using 122 dental radiographs, covering periapical and bitewing radiographs, from the Faculty of Dentistry, University of Indonesia, which are representative of the real radiographs used in dentistry and forensics; an average segmentation accuracy of 77.7% is obtained by comparing each automatic segmentation result with the corresponding manual segmentation result as a reference. This proposal is a crucial part of our automatic dental-based identification system, which is under development. Since manual dental-based identification is widely used for personal identification, an accurate automatic system can assist forensic experts in identifying large numbers of victims, making the identification of victims of disasters such as the Indian Ocean Tsunami and the Tohoku Earthquake manageable.
Cite this article as: Martin Leonard Tangel, Chastine Fatichah, Muhammad Rahmat Widyanto, Fangyan Dong, and Kaoru Hirota, “Multiscale Image Aggregation for Dental Radiograph Segmentation,” J. Adv. Comput. Intell. Intell. Inform., Vol.16, No.3, pp. 388-396, 2012.
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.