JACIII Vol.26 No.2 pp. 138-146
doi: 10.20965/jaciii.2022.p0138


Data Augmentation Using Generative Adversarial Networks for Multi-Class Segmentation of Lung Confocal IF Images

Daiki Katsuma*, Hiroharu Kawanaka*, V. B. Surya Prasath**,***, and Bruce J. Aronow**,***

*Graduate School of Engineering, Mie University
1577 Kurima-machiya, Tsu, Mie 514-8507, Japan

**Division of Biomedical Informatics, Cincinnati Children's Hospital Medical Center
3333 Burnet Avenue, Cincinnati, OH 45229, USA

***Department of Pediatrics, University of Cincinnati College of Medicine
Cincinnati, OH 45257, USA

Received: September 3, 2021
Accepted: December 7, 2021
Published: March 20, 2022
Keywords: immunofluorescence image, segmentation, data augmentation, image synthesis, generative adversarial networks

The human lung is a complex organ with high cellular heterogeneity, and its development and maintenance require interactive gene networks and dynamic cross-talk among multiple cell types. We focus on confocal immunofluorescence (IF) images of lung tissues from the LungMAP database to study lung development. Using current state-of-the-art deep learning-based models, we aim to obtain accurate multi-class segmentation of lung confocal IF images. One of the primary bottlenecks in training deep Convolutional Neural Network (CNN) models is the lack of large-scale ground-truth segmentation labels. We therefore implement multi-class segmentation with Generative Adversarial Network (GAN) models to expand the training dataset and improve overall segmentation accuracy, and we discuss the effectiveness of the synthesized images for segmenting IF images. Experimental results show that the proposed method increased the accuracy of six-class segmentation using Mask R-CNN by 15.1%. In particular, accuracy improved most for classes with few training samples. The synthetic dataset can therefore moderate class imbalance and be used to expand the training set.
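The rebalancing idea behind the synthetic dataset can be sketched as follows: count the annotated samples per class and assign the largest synthetic-generation quota to the rarest classes. This is a minimal illustration only; the class names and counts are hypothetical and are not the paper's actual dataset.

```python
from collections import Counter

# Hypothetical per-class counts of annotated regions in a real training
# set (illustrative numbers, not the LungMAP data used in the paper).
real_counts = Counter({
    "class_a": 420,
    "class_b": 310,
    "class_c": 95,
    "class_d": 260,
    "class_e": 500,
    "class_f": 480,
})

def synthetic_quota(counts, target=None):
    """Return how many synthetic samples each class needs so that every
    class reaches the size of the largest (or a given target) class."""
    if target is None:
        target = max(counts.values())
    return {cls: max(0, target - n) for cls, n in counts.items()}

quota = synthetic_quota(real_counts)
# Classes with the fewest real samples receive the most synthetic images,
# which is how GAN-generated images can moderate class imbalance.
for cls, n in sorted(quota.items(), key=lambda kv: -kv[1]):
    print(f"{cls}: generate {n} synthetic samples")
```

In this sketch, the rarest class (95 samples) receives the largest quota (405 synthetic samples), while the largest class needs none; the GAN-generated label/image pairs would then be mixed into the real training set before training the segmentation model.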

Cite this article as:
D. Katsuma, H. Kawanaka, V. Prasath, and B. Aronow, “Data Augmentation Using Generative Adversarial Networks for Multi-Class Segmentation of Lung Confocal IF Images,” J. Adv. Comput. Intell. Intell. Inform., Vol.26 No.2, pp. 138-146, 2022.
