J. Robot. Mechatron., Vol.19 No.1, pp. 68-76 (2007)
doi: 10.20965/jrm.2007.p0068

Paper:

Learning from Approximate Human Decisions by a Robot

Chandimal Jayawardena, Keigo Watanabe, and Kiyotaka Izumi

Department of Advanced Systems Control Engineering, Graduate School of Science and Engineering, Saga University, 1-Honjomachi, Saga 840-8502, Japan

Received: March 8, 2006
Accepted: October 23, 2006
Published: February 20, 2007

Keywords: natural language, robot, learning, approximate decision, probabilistic neural network, natural-language command
Abstract
Robot systems operating under natural-language commands must be able to infer the meaning intended by the issuer. Although research in this area has produced some successes, one important related aspect has not yet been addressed: the possibility of learning from the natural-language commands themselves. Such commands, generated by human users, carry valuable information, but the inherent subjectivity of natural language complicates both their interpretation and any learning based on them. We propose a decision-making scheme for robots operating under natural-language commands that is influenced by human aspects of decision making. Under the proposed concept, demonstrated in experiments with a robotic manipulator, the robot is first controlled by natural-language commands to perform pick-and-place operations, during which it builds a knowledge base. After this learning phase, which uses a probabilistic neural network, the robot performs similar tasks on its own, making approximate decisions based on the knowledge gained.
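For readers unfamiliar with the learning mechanism the abstract names, the sketch below illustrates how a Specht-style probabilistic neural network (PNN) can store labelled examples and classify a new input by Parzen-window density estimation. It is a minimal illustration only: the feature encoding of commands, the action labels, and the smoothing parameter sigma are assumptions made for this example, not the representation used in the paper.

```python
import numpy as np

class PNN:
    """Minimal Parzen-window (Specht-style) probabilistic neural network."""

    def __init__(self, sigma=0.15):
        self.sigma = sigma   # kernel smoothing width (illustrative value)
        self.classes = {}    # label -> 2-D array of stored training patterns

    def fit(self, X, y):
        # PNN "training" is a single pass: store each pattern under its label.
        # (Intended to be called once on the full training set.)
        grouped = {}
        for xi, yi in zip(np.asarray(X, dtype=float), y):
            grouped.setdefault(yi, []).append(xi)
        self.classes = {label: np.vstack(rows) for label, rows in grouped.items()}

    def predict(self, x):
        # Score each class by its average Gaussian kernel response at x,
        # then return the label with the highest estimated density.
        x = np.asarray(x, dtype=float)

        def density(patterns):
            d2 = np.sum((patterns - x) ** 2, axis=1)
            return np.mean(np.exp(-d2 / (2.0 * self.sigma ** 2)))

        return max(self.classes, key=lambda label: density(self.classes[label]))

# Hypothetical usage: each command situation is encoded as a numeric feature
# vector (here, made-up "distance to target" and "urgency" features) and
# labelled with the discrete action the human operator chose in that situation.
pnn = PNN(sigma=0.15)
pnn.fit([[0.9, 0.8], [0.8, 0.9], [0.2, 0.1], [0.1, 0.2]],
        ["move fast", "move fast", "move slowly", "move slowly"])
print(pnn.predict([0.85, 0.75]))  # -> "move fast"
```

Because a PNN stores training patterns rather than fitting weights iteratively, learning amounts to a single pass over the collected command data, which is consistent with the incremental knowledge-base construction the abstract describes.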
Cite this article as:
C. Jayawardena, K. Watanabe, and K. Izumi, “Learning from Approximate Human Decisions by a Robot,” J. Robot. Mechatron., Vol.19 No.1, pp. 68-76, 2007.
