IJAT Vol.15 No.2, pp. 206-214 (2021)
doi: 10.20965/ijat.2021.p0206

Paper:

Predicting Positioning Error and Finding Features for Large Industrial Robots Based on Deep Learning

Daiki Kato*,†, Kenya Yoshitsugu*, Toshiki Hirogaki*, Eiichi Aoyama*, and Kenichi Takahashi**

*Doshisha University
1-3 Tataramiyakodani, Kyotanabe, Kyoto 610-0394, Japan

†Corresponding author

**IHI Corporation, Tokyo, Japan

Received: August 27, 2020
Accepted: January 20, 2021
Published: March 5, 2021
Keywords: industrial robot, positioning accuracy, deep learning, convolutional neural network
Abstract

In this study, we evaluated the motion accuracy of a large industrial robot and a method of compensating for it, and constructed an off-line teaching operation based on three-dimensional computer-aided design (CAD) data. In the experiment, a laser tracker was used to measure the coordinates of the robot's end effector. Simultaneously, the end-effector coordinates, each joint angle, the maximum current of the motor attached to each joint, and the rotation speed of each joint were recorded. This servo information was converted into image data so that it could be treated as visible information: for each robot movement path, an image was created whose horizontal axis represented the movement time of the robot and whose vertical axis represented the servo information. A convolutional neural network (CNN), a type of deep learning, was used to predict the positioning error with high accuracy. Subsequently, to identify the features of the positioning error, the image was divided into several analysis areas; one area at a time was filled with various colors, and the resulting image was analyzed by the CNN. If the prediction accuracy of the CNN decreased, the filled analysis area was identified as a feature. In this way, features of the Y-axis positioning error were observed in the teaching of each joint angle in the opposite direction just after the start of the motion, the overshoot of the rotational joint current, and the change in the swivel joint current.
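
As a minimal sketch of the image-encoding step described above (assuming NumPy and Pillow are available; the channel contents, scaling, and band height are illustrative and not taken from the paper), per-joint servo signals sampled over one motion path could be stacked into a single grayscale image, with time on the horizontal axis and one band per servo channel on the vertical axis:

import numpy as np
from PIL import Image

def servo_to_image(signals, band_height=16):
    # Stack equal-length 1-D servo signals (joint angle, motor current,
    # rotation speed, ...) into one grayscale image: x = time step,
    # y = one horizontal band per channel. Each channel is min-max
    # scaled to 0-255 independently. Names and sizes are illustrative.
    bands = []
    for s in signals:
        s = np.asarray(s, dtype=np.float64)
        lo, hi = s.min(), s.max()
        scaled = np.zeros_like(s) if hi == lo else (s - lo) / (hi - lo)
        bands.append(np.tile((scaled * 255).astype(np.uint8), (band_height, 1)))
    return Image.fromarray(np.vstack(bands), mode="L")

# Illustrative use with three synthetic channels over 200 time steps.
t = np.linspace(0.0, 1.0, 200)
servo_to_image([np.sin(2 * np.pi * t), t, np.cos(4 * np.pi * t)]).save("path.png")

The feature-finding step works like an occlusion analysis. A sketch under stated assumptions (`model` stands for the trained CNN and its `predict` method is a placeholder interface; images are assumed to be float arrays normalized to [0, 1]; the grid size and fill value are illustrative): mask one analysis area at a time, re-predict, and flag areas whose masking increases the prediction error.

import numpy as np

def occlusion_scores(model, images, targets, grid=(4, 4), fill=0.5):
    # For each cell of a coarse grid over the image, fill the cell with a
    # constant value, re-run the trained CNN (placeholder: model.predict),
    # and record the growth in mean absolute error over the unmasked
    # baseline. A large growth marks that area as a feature.
    n, h, w = images.shape[:3]
    base_err = np.mean(np.abs(model.predict(images) - targets))
    rows, cols = grid
    scores = np.zeros(grid)
    for i in range(rows):
        for j in range(cols):
            masked = images.copy()
            masked[:, i * h // rows:(i + 1) * h // rows,
                      j * w // cols:(j + 1) * w // cols] = fill
            err = np.mean(np.abs(model.predict(masked) - targets))
            scores[i, j] = err - base_err
    return scores

In this sketch, the areas with the largest scores are the ones the network relies on most, matching the paper's criterion that a drop in prediction accuracy identifies the filled analysis area as a feature.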

Cite this article as:
D. Kato, K. Yoshitsugu, T. Hirogaki, E. Aoyama, and K. Takahashi, “Predicting Positioning Error and Finding Features for Large Industrial Robots Based on Deep Learning,” Int. J. Automation Technol., Vol.15 No.2, pp. 206-214, 2021.
