
IJAT Vol.19 No.3, pp. 258-267 (2025)
doi: 10.20965/ijat.2025.p0258

Research Paper:

Classification Method of Corneocytes from Brilliant Green-Stained Images Using Deep Learning

Koichiro Enomoto*1,*2,†, Ren Yasuda*1, Taeko Mizutani*3, Yuri Okano*3, and Takenori Tanaka*4

*1The University of Shiga Prefecture
2500 Hassaka-cho, Hikone-shi, Shiga 522-8533, Japan

*2Regional ICT Research Center of Human, Industry and Future, The University of Shiga Prefecture
Hikone, Japan

*3CIEL Co., Ltd.
Sagamihara, Japan

*4Niigata SL Co., Ltd.
Niigata, Japan

†Corresponding author

Received: November 25, 2024
Accepted: January 30, 2025
Published: May 5, 2025
Keywords: stratum corneum, corneocytes, BG-stained image, image diagnosis support system, deep learning
Abstract

The number of parakeratotic corneocytes is an important parameter for diagnosing stratum corneum conditions. However, parakeratotic corneocytes are usually identified visually by an expert, which is time-consuming and prone to human error. In this study, we propose a method for classifying normal corneocytes, parakeratotic corneocytes, and ghost-nucleus corneocytes. The proposed system first extracts each corneocyte region from a brilliant green (BG)-stained image using a cell-specific trained deep learning model. We then evaluated the three-class classification using different deep learning models: VGG16, VGG19, EfficientNet, EfficientNetV2, and Vision Transformer. Vision Transformer achieved the highest accuracy, 99.08%, which is sufficient for image-based diagnosis of stratum corneum conditions.

Cite this article as:
K. Enomoto, R. Yasuda, T. Mizutani, Y. Okano, and T. Tanaka, “Classification Method of Corneocytes from Brilliant Green-Stained Images Using Deep Learning,” Int. J. Automation Technol., Vol.19 No.3, pp. 258-267, 2025.
