
JRM Vol.17 No.5 pp. 596-604 (2005)
doi: 10.20965/jrm.2005.p0596

Paper:

Autonomous Role Assignment in a Homogeneous Multi-Robot System

Toshiyuki Yasuda* and Kazuhiro Ohkura**

*Graduate School of Science and Technology, Kobe University, 1-1 Rokkodai, Nada, Kobe 657-8501, Japan

**Department of Mechanical Engineering, Faculty of Engineering, Kobe University, 1-1 Rokkodai, Nada, Kobe 657-8501, Japan

Received: May 16, 2005
Accepted: August 22, 2005
Published: October 20, 2005
Keywords: multi-robot system, autonomous specialization, reinforcement learning
Abstract
This paper describes an approach to controlling an autonomous homogeneous multi-robot system. A key issue for this type of system is the design of an on-line autonomous behavior acquisition mechanism that can develop cooperative roles and assign them appropriately to each robot in a noisy embedded environment. Our approach applies reinforcement learning that adopts the Bayesian discrimination method to segment a continuous state space and a continuous action space simultaneously. In addition, a neural network predicts the average of the other robots’ postures at the next time step in order to stabilize the reinforcement learning environment. The proposed method is validated through computer simulations and experiments with our hand-made multi-robot system.
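
To illustrate the predictor idea mentioned in the abstract, the following is a minimal sketch of how a small neural network could be trained on-line to predict the teammates’ average posture one step ahead, with the prediction then appended to a robot’s own state. The one-hidden-layer architecture, layer sizes, learning rate, and the three-element (x, y, heading) posture encoding are assumptions made for illustration only, not details taken from the paper.

import numpy as np

class PosturePredictor:
    """Hypothetical one-hidden-layer predictor: given the current average
    posture (x, y, heading) of the other robots, estimate that average at
    the next time step."""

    def __init__(self, n_in=3, n_hidden=8, n_out=3, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def predict(self, avg_posture):
        # Forward pass: tanh hidden layer, linear output.
        self._h = np.tanh(avg_posture @ self.W1 + self.b1)
        return self._h @ self.W2 + self.b2

    def update(self, avg_posture, next_avg_posture):
        # One step of on-line gradient descent on the squared prediction error.
        pred = self.predict(avg_posture)
        err = pred - next_avg_posture
        dh = (err @ self.W2.T) * (1.0 - self._h ** 2)
        self.W2 -= self.lr * np.outer(self._h, err)
        self.b2 -= self.lr * err
        self.W1 -= self.lr * np.outer(avg_posture, dh)
        self.b1 -= self.lr * dh
        return float(0.5 * err @ err)

# Usage sketch: each robot augments its sensory state with the predicted
# average posture of its teammates before selecting an action.
predictor = PosturePredictor()
avg_now = np.array([0.2, -0.1, 0.05])     # current average posture (assumed units)
avg_next = np.array([0.25, -0.08, 0.04])  # observed at the next time step
state_augmentation = predictor.predict(avg_now)
loss = predictor.update(avg_now, avg_next)

Feeding the learner a forecast of the teammates’ joint behavior, rather than only their current configuration, is one plain way to make the environment each robot faces look more stationary while all robots learn simultaneously.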
Cite this article as:
T. Yasuda and K. Ohkura, “Autonomous Role Assignment in a Homogeneous Multi-Robot System,” J. Robot. Mechatron., Vol.17 No.5, pp. 596-604, 2005.
