IJAT Vol.3 No.6 pp. 671-680
doi: 10.20965/ijat.2009.p0671


Operational “Feel” Adjustment by Reinforcement Learning for a Power-Assisted Positioning Task

Tetsuya Morizono*, Yoji Yamada**, and Masatake Higashi***

*Department of Information and Systems Engineering, Fukuoka Institute of Technology, 3-30-1 Wajiro-Higashi, Higashi-ku, Fukuoka 811-0295, Japan

**Department of Mechanical Science and Engineering, Graduate School of Engineering, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8603, Japan

***Department of Advanced Science and Technology, Toyota Technological Institute, 2-12-1 Hisakata, Tempaku-ku, Nagoya 468-8511, Japan

Received: June 16, 2009
Accepted: August 4, 2009
Published: November 5, 2009
Keywords: power-assist robot, positioning task, operational feel, reinforcement learning, multiple goals
The operational “feel” of a power-assist robot is important to its operability, user satisfaction, and task efficiency. This paper considers the autonomous adjustment of operational “feel” for robots under impedance control, and discusses the use of reinforcement learning for this adjustment when a task includes repetitive positioning. Experimental results demonstrate that the adjustment develops an operational “feel” pattern appropriate for positioning at a goal. The adjustment scheme, which initially assumes a single fixed goal, is then extended to tasks with multiple goals, one of which is chosen by the user in real time. To adjust operational “feel” individually for each goal, an algorithm infers which goal the user intends. Experiments yield the same result as in the single-goal case, but they also suggest that the design should be improved so that the learning algorithm takes the accuracy of goal inference into account.
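The idea summarized in the abstract can be illustrated with a toy sketch. This is not the authors' actual method, robot, or task setup: all parameters, the candidate damping values, the delayed-proportional operator model, and the bandit-style learner below are hypothetical, chosen only to show how trial-and-error adjustment of an impedance parameter can shape operational "feel" in a 1-D positioning task.

```python
import random

# Hypothetical sketch: a 1-D admittance-controlled positioning task.
# The operator pushes toward a goal; the robot maps force to velocity
# as v = f / d, where d is a virtual damping that shapes the "feel".
# A bandit-style reinforcement learner selects d to minimize settling time.

DAMPINGS = [2.0, 5.0, 10.0, 20.0]  # hypothetical candidate damping values
GOAL, TOL, DT = 1.0, 0.02, 0.01    # goal position, tolerance, time step [s]

def simulate_trial(d, k_op=30.0, delay=20, steps=3000):
    """Simulate one positioning trial; return the settling time [s].

    The operator is modeled crudely as a delayed proportional controller:
    too little damping makes the loop oscillate, too much makes it sluggish.
    """
    x, hist, inside = 0.0, [0.0] * delay, 0
    for k in range(steps):
        f = k_op * (GOAL - hist[0])   # force based on delayed perception
        hist = hist[1:] + [x]
        x += (f / d) * DT             # admittance: velocity = force / damping
        if abs(GOAL - x) < TOL:
            inside += 1
            if inside >= 50:          # settled: 50 consecutive steps in band
                return (k - 49) * DT
        else:
            inside = 0
    return steps * DT                 # never settled within the trial

def learn(episodes=200, eps=0.1, alpha=0.2, seed=0):
    """Epsilon-greedy bandit over damping values; reward = -settling time."""
    rng = random.Random(seed)
    q = {d: 0.0 for d in DAMPINGS}
    for _ in range(episodes):
        if rng.random() < eps:
            d = rng.choice(DAMPINGS)  # explore a random damping
        else:
            d = max(q, key=q.get)     # exploit current value estimates
        r = -simulate_trial(d)
        q[d] += alpha * (r - q[d])    # incremental action-value update
    return max(q, key=q.get)
```

In this toy setting the learner steers away from the smallest damping, whose delayed loop goes unstable, and from the sluggish largest one, settling on an intermediate value: a crude analogue of developing an operational "feel" pattern appropriate for positioning by repetition.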
Cite this article as:
T. Morizono, Y. Yamada, and M. Higashi, “Operational “Feel” Adjustment by Reinforcement Learning for a Power-Assisted Positioning Task,” Int. J. Automation Technol., Vol.3 No.6, pp. 671-680, 2009.
