Research Paper:
Generation of Reaching Motions for Flat Cable Insertion Task Using Simulation Learning and Domain Adaptation for Industrial Robots
Yuki Yamaguchi*, Shinsuke Nakashima*, Hiroki Murakami**, Tetsushi Nakai**, Qi An*, and Atsushi Yamashita*

*The University of Tokyo
5-1-5 Kashiwanoha, Kashiwa, Chiba 277-8563, Japan
Corresponding author
**NACHI-FUJIKOSHI Corp.
Tokyo, Japan
In this paper, we propose a method for generating reaching motions for the insertion of flat cables. Although flat cables must be inserted into sockets during the circuit assembly of various electronic devices, there has been little research on automating the insertion of flat cables that are already fixed on one side to the circuit board. We therefore focus on generating reaching motions that bring the manipulator into a posture for grasping such flat cables. Our method uses deep reinforcement learning in a simulation environment, where the state consists of features extracted from the camera image and the pose of the manipulator. For transfer from the simulation environment to the real-world environment, we use a CycleGAN-based domain adaptation method. We conducted experiments under several different conditions in a real-world environment to verify the operation of the trained agent. The results demonstrated that the success rate of the generated reaching motions exceeded 70% under all conditions.
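The abstract describes a state composed of image features and the manipulator pose. As a minimal sketch of this idea (the dimensions, the 32-D image latent, and the 7-D position-plus-quaternion pose are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def make_state(image_features: np.ndarray, ee_pose: np.ndarray) -> np.ndarray:
    """Form an RL state vector by concatenating features extracted from the
    camera image with the manipulator's end-effector pose.

    image_features: latent vector from an image encoder (dimension assumed).
    ee_pose: end-effector pose, here assumed to be position (x, y, z)
             plus an orientation quaternion (qw, qx, qy, qz).
    """
    return np.concatenate([image_features.ravel(), ee_pose.ravel()])

# Hypothetical dimensions: a 32-D image latent and a 7-D pose.
z = np.zeros(32)
pose = np.zeros(7)
state = make_state(z, pose)
print(state.shape)  # (39,)
```

A policy network would then map this concatenated vector to a manipulator action; the actual encoder and pose representation used by the authors may differ.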
This article is published under a Creative Commons Attribution-NoDerivatives 4.0 International License.