
IJAT Vol.15 No.5, pp. 669-677 (2021)
doi: 10.20965/ijat.2021.p0669

Paper:

Imitation Learning System Design with Small Training Data for Flexible Tool Manipulation

Harumo Sasatake*,†, Ryosuke Tasaki**, Takahito Yamashita**, and Naoki Uchiyama*

*Toyohashi University of Technology
1-1 Tempaku-cho, Toyohashi, Aichi 441-8580, Japan

†Corresponding author

**Aoyama Gakuin University, Sagamihara, Japan

Received: February 26, 2021
Accepted: April 26, 2021
Published: September 5, 2021
Keywords: system integration, tool manipulation, imitation learning, deep learning, human support robot
Abstract

Population aging has become a major problem in developed countries. As the labor force declines, robot arms are expected to replace human labor in simple tasks. A robot arm is typically fitted with a task-specific tool and taught its motion by an engineer with expert knowledge. However, the number of such engineers is limited; therefore, a teaching method that non-technical personnel can use is needed. As such a teaching method, deep learning can be used to imitate human behavior and tool usage; however, deep learning requires a large amount of training data. In this study, the target task of the robot is to sweep multiple pieces of dirt using a broom. The proposed learning system estimates the initial parameters of the deep learning model from prior experience with tools and from the shape and physical properties of a new tool, thereby reducing the amount of training data required when learning the new tool. A virtual reality system is used to move the robot arm easily and safely, and to create training data for imitation. Cleaning experiments are conducted to evaluate the effectiveness of the proposed method. The experimental results confirm that the proposed method accelerates deep learning and acquires cleaning ability from a small amount of training data.
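As a rough illustration of the idea summarized above (a sketch, not the authors' implementation), the following Python/PyTorch example shows behavior cloning with warm-started parameters: a policy network trained on a familiar tool initializes training for a new tool, so only a small demonstration set is needed. The class name, network sizes, observation/action dimensions, and checkpoint filename are all hypothetical placeholders.

    # Minimal sketch: behavior cloning with warm-started weights.
    # Assumption: a policy trained on a known tool can initialize
    # learning for a new tool, reducing the demonstrations required.
    import torch
    import torch.nn as nn

    class BroomPolicy(nn.Module):
        """Maps an observation (e.g., image features + end-effector pose)
        to a commanded end-effector motion. Sizes are placeholders."""
        def __init__(self, obs_dim=64, act_dim=6):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, act_dim),
            )

        def forward(self, obs):
            return self.net(obs)

    def train_bc(policy, demos, epochs=50, lr=1e-3):
        """Behavior cloning: regress demonstrated actions from observations."""
        opt = torch.optim.Adam(policy.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            for obs, act in demos:  # demos: iterable of (obs, action) batches
                opt.zero_grad()
                loss = loss_fn(policy(obs), act)
                loss.backward()
                opt.step()
        return policy

    # Warm start: load parameters learned on a familiar tool, then
    # fine-tune on a small set of VR-teleoperation demonstrations.
    policy = BroomPolicy()
    # policy.load_state_dict(torch.load("known_tool_policy.pt"))  # hypothetical checkpoint
    small_demo_set = [(torch.randn(8, 64), torch.randn(8, 6)) for _ in range(5)]
    train_bc(policy, small_demo_set)

In this reading, the warm start plays the role of the paper's experience-based initial-parameter estimation, while the VR teleoperation system supplies the small demonstration set used for fine-tuning.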

Cite this article as:
H. Sasatake, R. Tasaki, T. Yamashita, and N. Uchiyama, “Imitation Learning System Design with Small Training Data for Flexible Tool Manipulation,” Int. J. Automation Technol., Vol.15 No.5, pp. 669-677, 2021.

