Research Paper:
Automatic Characterization of WEDM Single Craters Through AI Based Object Detection
Eduardo Gonzalez-Sanchez, Davide Saccardo, Paulo Borges Esteves, Michal Kuffa, and Konrad Wegener
Eidgenössische Technische Hochschule (ETH) Zürich
Technoparkstrasse 1, Zürich 8005, Switzerland
Wire electrical discharge machining (WEDM) is a process that removes material from conductive workpieces through sequential electrical discharges. The morphology of the craters formed by these discharges is influenced by various process parameters and affects the quality and efficiency of the machining. To understand and optimize the WEDM process, it is essential to identify and characterize single craters from microscopy images. However, manual labeling of craters is tedious and prone to errors. This paper presents a novel approach to detecting and segmenting single craters using state-of-the-art computer vision techniques. The YOLOv8 model, a convolutional neural network-based object detector, is fine-tuned on a custom dataset of WEDM craters to locate them and enclose them with tight bounding boxes. The Segment Anything Model (SAM), a vision transformer-based instance segmentation technique, is then applied to the cropped images of individual craters to delineate their shape and size. Geometric analysis of the segmented craters reveals significant variations in contour and area depending on the discharge energy setting, whereas the wire diameter has minimal influence.
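A minimal sketch of the detection-and-segmentation pipeline outlined above is given below, assuming the `ultralytics` (YOLOv8) and `segment_anything` (SAM) Python packages together with OpenCV. File names such as `craters.yaml` and `crater_micrograph.png` are hypothetical placeholders, and the detected bounding box is passed to SAM as a prompt rather than cropping the image, which is a simplification of the per-crater cropping described in the abstract.

```python
# Sketch of the crater characterization pipeline described in the abstract.
# Assumptions: ultralytics (YOLOv8), segment_anything (SAM), and OpenCV are
# installed; all file names below are hypothetical placeholders.
import cv2
import numpy as np
from ultralytics import YOLO
from segment_anything import SamPredictor, sam_model_registry

# 1. Fine-tune YOLOv8 on a custom dataset of labeled WEDM craters.
detector = YOLO("yolov8n.pt")                      # pretrained backbone
detector.train(data="craters.yaml", epochs=100, imgsz=640)

# 2. Detect craters in a microscopy image and keep their bounding boxes.
image_bgr = cv2.imread("crater_micrograph.png")
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
detections = detector(image_rgb)[0]
boxes_xyxy = detections.boxes.xyxy.cpu().numpy()   # (N, 4) pixel coordinates

# 3. Segment each detected crater with SAM, prompted by its bounding box.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)
predictor.set_image(image_rgb)

crater_areas_px = []
for box in boxes_xyxy:
    masks, _, _ = predictor.predict(box=box, multimask_output=False)
    mask = masks[0].astype(np.uint8)               # binary crater mask

    # 4. Geometric analysis: contour and area of the segmented crater.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        largest = max(contours, key=cv2.contourArea)
        crater_areas_px.append(cv2.contourArea(largest))

print(f"Detected {len(crater_areas_px)} craters, "
      f"mean area {np.mean(crater_areas_px):.1f} px^2")
```

Prompting SAM with the detector's box keeps the segmentation focused on one crater at a time; converting the pixel areas to physical units would additionally require the micrograph's scale factor, which is not assumed here.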
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.