
IJAT Vol.19 No.4, pp. 642-650 (2025)
doi: 10.20965/ijat.2025.p0642

Research Paper:

Automated Detection of Harmful Red Tide Phytoplankton Using Deep Learning-Based Object Detection Models

Tomoka Kawano*1,†, Masahiro Migita*1, Kaito Kamimura*2, Atsushi Urabe*3, Haruo Yamaguchi*4, Setsuko Sakamoto*5, Yuji Tomaru*5, and Masashi Toda*1

*1Kumamoto University
2-40-1 Kurokami, Chuo-ku, Kumamoto, Japan

†Corresponding author

*2Kochi Prefectural Fisheries Experimental Station
Susaki, Japan

*3Fisheries Management Division, Fisheries Promotion Department, Kochi Prefectural Government
Kochi, Japan

*4Faculty of Agriculture and Marine Sciences, Kochi University
Nankoku, Japan

*5Fisheries Technology Institute, Japan Fisheries Research and Education Agency
Hatsukaichi, Japan

Received: December 1, 2024
Accepted: February 12, 2025
Published: July 5, 2025
Keywords: red tide detection, deep learning, object detection
Abstract

Red tides are phenomena caused by the abnormal proliferation of marine phytoplankton, leading to mass fish mortality and severe economic damage to fisheries. Currently, the detection and quantification of harmful phytoplankton rely primarily on manual inspection using optical microscopes. This process is time-consuming, labor-intensive, and requires specialized expertise in species identification. In this study, we propose an automated detection system using deep learning-based object detection methods to classify various marine phytoplankton species from microscopic images and identify harmful red tide-related species. Our approach aims to enhance early detection capabilities, reduce the burden on researchers, and improve the accuracy of harmful phytoplankton monitoring.
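To make the approach described above concrete, the Python sketch below shows one common way such a detector is built: fine-tuning a COCO-pretrained Faster R-CNN from torchvision by replacing its box-prediction head with one sized for plankton classes. This is an illustrative sketch only, not the authors' implementation; the species label set and the confidence threshold are hypothetical placeholders.

    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

    # Hypothetical label set; the actual species classes are defined by the study's dataset.
    PLANKTON_CLASSES = [
        "__background__",
        "Chattonella antiqua",
        "Karenia mikimotoi",
        "Heterosigma akashiwo",
    ]

    def build_detector(num_classes: int) -> torch.nn.Module:
        # Start from a COCO-pretrained Faster R-CNN and swap in a new box
        # predictor sized for the plankton classes (transfer learning).
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
        return model

    model = build_detector(num_classes=len(PLANKTON_CLASSES))
    model.eval()

    with torch.no_grad():
        frame = torch.rand(3, 800, 800)    # stand-in for one microscope image tensor
        detections = model([frame])[0]     # dict with "boxes", "labels", "scores"

    keep = detections["scores"] > 0.5      # hypothetical confidence threshold
    found = [PLANKTON_CLASSES[i] for i in detections["labels"][keep].tolist()]
    print(found)

As written, the model has an untrained head and only demonstrates the inference interface; in practice it would first be fine-tuned on bounding-box-annotated microscopy images before its detections are meaningful.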

Cite this article as:
T. Kawano, M. Migita, K. Kamimura, A. Urabe, H. Yamaguchi, S. Sakamoto, Y. Tomaru, and M. Toda, “Automated Detection of Harmful Red Tide Phytoplankton Using Deep Learning-Based Object Detection Models,” Int. J. Automation Technol., Vol.19 No.4, pp. 642-650, 2025.
