
JRM Vol.16 No.5 pp. 526-534 (2004)
doi: 10.20965/jrm.2004.p0526

Paper:

Communication Interface for Human-Robot Partnership

Naoyuki Kubota* and Yosuke Urushizaki**

*Dept. of Mechanical Engineering, Tokyo Metropolitan University, PRESTO, Japan Science and Technology Corporation, 1-1 Minami-Osawa, Hachioji, Tokyo 192-0397, Japan

**Dept. of Human and Artificial Intelligent Systems, University of Fukui, 3-9-1 Bunkyo, Fukui 910-8507, Japan

Received: May 26, 2004
Accepted: June 28, 2004
Published: October 20, 2004
Keywords: mobile robot, perceptual system, map building, reinforcement learning, interface
Abstract
This paper deals with learning for a communication interface supporting a person-robot-computer agent partnership. Although it is difficult for a robot to learn behaviors based on human intention through interaction with people in an actual environment, the robot can easily obtain environmental information using its sensors. Learning in computer simulation is relatively easy because contact patterns are restricted in the virtual environment, but the computer agent cannot collect environmental information about people. The robot and the computer agent thus play different roles. Interface design is vital for the computer agent, because the person interacts with the computer's virtual environment through the interface. Human intention should be extracted through communication with the computer agent in the virtual environment. In this study, we consider interaction between a robot and a person through a computer agent; the task given to the person is to guide the robot to a target point based on human intention. For this, we use a computer agent that is assumed to obtain energy at a specific point in the virtual environment. We propose a method for extracting human intention using multiple state-value functions. A state-value function is selected based on the human's tapping pattern on the PDA used as the interface to the computer agent, and is updated by a reinforcement learning algorithm based on a reward. Experimental results demonstrate the effectiveness of the proposed method.
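The intention-extraction scheme described in the abstract can be pictured with a minimal sketch: the agent keeps one state-value function per assumed human intention, selects one from the tapping pattern on the interface, and updates the selected function with a temporal-difference reinforcement-learning rule. The state encoding, the tapping thresholds, and all parameter values below are illustrative assumptions, not the authors' implementation.

import random

# Minimal sketch (assumptions, not the paper's implementation):
# several state-value functions, one per hypothetical human intention,
# selected by tapping pattern and updated with a TD(0) rule.

NUM_STATES = 25          # e.g., a 5x5 grid in the virtual environment (assumed)
NUM_INTENTIONS = 3       # assumed number of human intentions / value functions
ALPHA = 0.1              # learning rate (illustrative)
GAMMA = 0.9              # discount factor (illustrative)

# One state-value table per assumed intention.
value_functions = [[0.0] * NUM_STATES for _ in range(NUM_INTENTIONS)]

def select_value_function(tap_interval_ms: float) -> int:
    """Map a tapping pattern to an intention index (illustrative thresholds)."""
    if tap_interval_ms < 200:
        return 0   # e.g., "hurry to the target"
    elif tap_interval_ms < 600:
        return 1   # e.g., "move carefully"
    return 2       # e.g., "collect energy first"

def td_update(intention: int, state: int, next_state: int, reward: float) -> None:
    """TD(0) update of the selected state-value function."""
    v = value_functions[intention]
    v[state] += ALPHA * (reward + GAMMA * v[next_state] - v[state])

# Toy interaction loop: random transitions stand in for the agent's moves.
for step in range(100):
    intention = select_value_function(tap_interval_ms=random.uniform(100, 800))
    state = random.randrange(NUM_STATES)
    next_state = random.randrange(NUM_STATES)
    reward = 1.0 if next_state == NUM_STATES - 1 else 0.0  # reward at an assumed goal cell
    td_update(intention, state, next_state, reward)

print(value_functions[0][:5])

In an actual interface, the random transitions would be replaced by the agent's observed state changes in the virtual environment, and the reward would reflect reaching the target or energy point.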
Cite this article as:
N. Kubota and Y. Urushizaki, “Communication Interface for Human-Robot Partnership,” J. Robot. Mechatron., Vol.16 No.5, pp. 526-534, 2004.
