
JACIII Vol.27 No.5 pp. 876-885 (2023)
doi: 10.20965/jaciii.2023.p0876

Research Paper:

Deep Feature Fusion Classification Model for Identifying Machine Parts

Amina Batool, Yaping Dai, Hongbin Ma, and Sijie Yin

School of Automation, Beijing Institute of Technology
No.5 South Street, Zhongguancun, Haidian District, Beijing 100081, China


Received: March 20, 2023
Accepted: May 12, 2023
Published: September 20, 2023
Keywords: object identification, multilayer feature fusion, variance-based deep fusion, machine component classification, convolutional neural networks
Abstract

In the digital world, automatic component classification is becoming increasingly essential for industrial and logistics applications. Automatically classifying machine parts such as bolts, nuts, locating pins, bearings, plugs, springs, and washers with computer vision is a challenging object recognition and classification task. Although these components vary in shape and class, they are difficult to distinguish when they appear nearly identical, particularly in images. This paper proposes a variance-based deep feature fusion classification model (DFFCM-v), built on a convolutional neural network (CNN), that identifies machine parts by extracting features and forwarding them to an AdaBoost classifier. DFFCM-v extracts multilayer features from input images, including precise edge information, and ranks them by variance. The deep feature vectors with higher variance are combined by weighted feature fusion to differentiate similar images and then fed to an ensemble AdaBoost classifier. Compared with existing CNN and one-shot learning models, the proposed approach achieves the highest accuracy, 99.52%, with 341,799 trainable parameters, demonstrating its effectiveness in distinguishing visually similar machine components and classifying them accurately.
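The pipeline the abstract describes (multilayer CNN features, variance-based selection, weighted fusion, AdaBoost classification) can be illustrated with a minimal Python sketch. The function name, the mean-variance threshold, and the uniform layer weights below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def variance_based_fusion(feature_maps, weights=None):
    """Keep high-variance features from each CNN layer, then fuse
    them by weighted concatenation (assumed fusion rule)."""
    kept = []
    for fm in feature_maps:                      # fm: (n_samples, n_features)
        v = fm.var(axis=0)                       # per-feature variance
        kept.append(fm[:, v > v.mean()])         # drop low-variance features
    if weights is None:
        weights = [1.0 / len(kept)] * len(kept)  # uniform weights (assumption)
    return np.hstack([w * f for w, f in zip(weights, kept)])

# Toy usage with random stand-ins for flattened CNN layer activations.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(200, d)) for d in (64, 128, 256)]
labels = rng.integers(0, 7, size=200)            # e.g., 7 part classes

fused = variance_based_fusion(layers)
clf = AdaBoostClassifier(n_estimators=100).fit(fused, labels)
```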

Cite this article as:
A. Batool, Y. Dai, H. Ma, and S. Yin, “Deep Feature Fusion Classification Model for Identifying Machine Parts,” J. Adv. Comput. Intell. Intell. Inform., Vol.27 No.5, pp. 876-885, 2023.
