
JACIII Vol.28 No.2, pp. 352-360, 2024
doi: 10.20965/jaciii.2024.p0352

Research Paper:

Estimating Tomato Plant Leaf Area Using Multiple Images from Different Viewing Angles

Nobuhiko Yamaguchi*, Hiroshi Okumura*, Osamu Fukuda*, Wen Liang Yeoh*, and Munehiro Tanaka**

*Graduate School of Science and Engineering, Saga University
1 Honjo, Saga-shi, Saga 840-8502, Japan

**Graduate School of Agriculture, Saga University
1 Honjo, Saga-shi, Saga 840-8502, Japan

Received: July 11, 2023
Accepted: October 31, 2023
Published: March 20, 2024
Keywords: tomato plant leaf area, estimation, multiple images, perspective effects
Abstract

Leaf area is an important measure for understanding the growth, development, and productivity of tomato plants. In this study, we proposed three methods, NP, D2, and D3, for estimating the leaf area of a potted tomato plant. The NP method uses multiple tomato plant images taken from different viewing angles to reduce the estimation error of the leaf area, while the D2 and D3 methods additionally compensate for perspective effects. The performance of the proposed methods was experimentally assessed using 40 “Momotaro Peace” tomato plants. The experimental results confirmed that the NP method achieved a smaller mean absolute percentage error (MAPE) on the test set than a conventional estimation method that uses a single tomato plant image. Likewise, the D2 and D3 methods achieved a smaller MAPE on the test set than a conventional method that does not compensate for perspective effects.
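
The paper's implementation is not reproduced on this page, but the three ingredients named in the abstract (combining estimates from multiple viewing angles, compensating projected areas for perspective, and scoring with MAPE) can be illustrated compactly. Below is a minimal Python sketch, assuming a pinhole camera model for the perspective correction and simple averaging across views in the spirit of ensemble methods (cf. refs. [15], [16]); all function names and numeric values are illustrative assumptions, not the paper's NP/D2/D3 procedures.

import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error (MAPE), the evaluation metric in the paper."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def perspective_corrected_area(pixel_area, depth_m, focal_px):
    # Pinhole-camera scaling (hypothetical correction): a roughly planar
    # patch at depth Z, imaged by a camera with focal length f in pixels,
    # covers pixel_area * (Z / f)^2 in world units, so a per-leaf depth
    # measurement lets projected area be converted to physical area.
    return pixel_area * (depth_m / focal_px) ** 2

def multi_view_estimate(per_view_areas):
    # Average the leaf-area estimates obtained from each viewing angle;
    # averaging several noisy views lowers the variance of the final
    # estimate, analogous to ensemble averaging.
    return float(np.mean(per_view_areas))

# Toy usage with made-up numbers: three views of one plant (areas in m^2).
per_view = [0.182, 0.195, 0.174]
estimate = multi_view_estimate(per_view)
print(f"multi-view estimate: {estimate:.3f} m^2")
print(f"MAPE vs. a reference value: {mape([0.185], [estimate]):.1f}%")

Under these assumptions, the single-view baseline corresponds to using one element of per_view directly, and skipping perspective_corrected_area corresponds to the uncompensated baseline the abstract compares against.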

Cite this article as:
N. Yamaguchi, H. Okumura, O. Fukuda, W. L. Yeoh, and M. Tanaka, “Estimating Tomato Plant Leaf Area Using Multiple Images from Different Viewing Angles,” J. Adv. Comput. Intell. Intell. Inform., Vol.28 No.2, pp. 352-360, 2024.
References
  [1] D.-P. Guo and Y.-Z. Sun, “Estimation of leaf area of stem lettuce (Lactuca sativa var. angustana) from linear measurements,” Indian J. of Agricultural Sciences, Vol.71, No.7, pp. 483-486, 2001.
  [2] H. Kücükönder, S. Boyaci, and A. Akyüz, “A modelling study with an artificial neural network: Developing estimation models for the tomato plant leaf area,” Turkish J. of Agriculture and Forestry, Vol.40, No.2, pp. 203-212, 2016. https://doi.org/10.3906/tar-1408-28
  [3] G. Carmassi, L. Incrocci, G. Incrocci, and A. Pardossi, “Non-destructive estimation of leaf area in tomato (Solanum lycopersicum L.) and gerbera (Gerbera jamesonii H. Bolus),” Agricoltura Mediterranea, Vol.137, pp. 172-176, 2007.
  [4] D. Schwarz and H.-P. Kläring, “Allometry to estimate leaf area of tomato,” J. of Plant Nutrition, Vol.24, No.8, pp. 1291-1309, 2001. https://doi.org/10.1081/PLN-100106982
  [5] N. Maeda, H. Suzuki, T. Kitajima, A. Kuwahara, and T. Yasuno, “Measurement of Tomato Leaf Area Using Depth Camera,” J. of Signal Processing, Vol.26, No.4, pp. 123-126, 2022. https://doi.org/10.2299/jsp.26.123
  [6] D. Li, L. Xu, C. Tan, E. Goodman, D. Fu, and L. Xin, “Digitization and Visualization of Greenhouse Tomato Plants in Indoor Environments,” Sensors, Vol.15, No.2, pp. 4019-4051, 2015. https://doi.org/10.3390/s150204019
  [7] T. Masuda, “Leaf Area Estimation by Semantic Segmentation of Point Cloud of Tomato Plants,” 2021 IEEE/CVF Int. Conf. on Computer Vision Workshops (ICCVW), pp. 1381-1389, 2021. https://doi.org/10.1109/ICCVW54120.2021.00159
  [8] D. Bolya, C. Zhou, F. Xiao, and Y. J. Lee, “YOLACT: Real-Time Instance Segmentation,” arXiv:1904.02689, 2019. https://doi.org/10.48550/arXiv.1904.02689
  [9] A. Grunnet-Jepsen and D. Tong, “Depth Post-Processing for Intel® RealSense™ D400 Depth Cameras,” Intel® RealSense™ Documentation, 2020.
  [10] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” 2017 IEEE Int. Conf. on Computer Vision (ICCV), pp. 2980-2988, 2017. https://doi.org/10.1109/ICCV.2017.322
  [11] B. Russell, A. Torralba, K. Murphy, and W. Freeman, “LabelMe: A Database and Web-Based Tool for Image Annotation,” Int. J. of Computer Vision, Vol.77, Nos.1-3, pp. 157-173, 2008. https://doi.org/10.1007/s11263-007-0090-8
  [12] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” 2016 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, 2016. https://doi.org/10.1109/CVPR.2016.90
  [13] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. S. Bernstein, A. C. Berg, and F.-F. Li, “ImageNet Large Scale Visual Recognition Challenge,” arXiv:1409.0575, 2014. https://doi.org/10.48550/arXiv.1409.0575
  [14] W. J. Smith, “Modern Optical Engineering: The Design of Optical Systems,” pp. 25-27, McGraw-Hill, 2000.
  [15] P. Sollich and A. Krogh, “Learning with ensembles: How overfitting can be useful,” D. Touretzky, M. Mozer, and M. Hasselmo (Eds.), Advances in Neural Information Processing Systems 8 (NIPS 1995), pp. 190-196, MIT Press, 1995.
  [16] L. I. Kuncheva and C. J. Whitaker, “Measures of Diversity in Classifier Ensembles and Their Relationship with the Ensemble Accuracy,” Machine Learning, Vol.51, pp. 181-207, 2003. https://doi.org/10.1023/A:1022859003006
