
JACIII Vol.21, No.5, pp. 917-929 (2017)
doi: 10.20965/jaciii.2017.p0917

Paper:

Comparison Between Reinforcement Learning Methods with Different Goal Selections in Multi-Agent Cooperation

Fumito Uwano and Keiki Takadama

The University of Electro-Communications
1-5-1 Chofugaoka, Chofu-shi, Tokyo, Japan

Received: March 30, 2017
Accepted: July 21, 2017
Published: September 20, 2017
Keywords: multi-agent system, reinforcement learning, internal reward, cooperation
Abstract

This study discusses the factors important for zero-communication multi-agent cooperation by comparing two modified reinforcement learning methods that employ different goal selections for multi-agent cooperation tasks. The first method, Profit Minimizing Reinforcement Learning (PMRL), forces each agent to learn how to reach the goal farthest from it, and the agent closest to each goal is then directed to that goal. The second method, Yielding Action Reinforcement Learning (YARL), has the agents learn through an ordinary Q-learning process; if agents come into conflict, the agent closest to the contested goal learns to yield it and reach the next-closest goal instead. To compare the two methods, we designed experiments that vary the following maze factors: (1) the locations of the start points and goals, (2) the number of agents, and (3) the size of the maze. Intensive simulations on the maze problem for the agent cooperation task revealed that both methods successfully enable the agents to exhibit cooperative behavior even when the size of the maze and the number of agents change. The PMRL mechanism always enables the agents to learn cooperative behavior, whereas the YARL mechanism lets the agents learn cooperative behavior within a small number of learning iterations. In zero-communication multi-agent cooperation, it is important that only the agents involved in a conflict cooperate with each other.
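
To make the goal-selection idea concrete, here is a minimal sketch of the tabular Q-learning update [11, 12] that both methods build on, with the reward made internal: the agent is rewarded only at the goal it has been assigned, so the assignment alone steers which goal it learns to reach. This is an illustrative Python simplification, not the authors' implementation; the one-dimensional corridor, the constants, and the names greedy and q_learning_toward are our own, and the full PMRL and YARL machinery (farthest-goal learning, conflict detection, and yielding) is not reproduced here.

    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount factor, exploration rate
    ACTIONS = (-1, +1)                     # step left / step right in a 1-D corridor
    GOALS = (0, 6)                         # goal cells at both ends
    START = 2                              # start cell: goal 0 is near, goal 6 is far

    def greedy(q, s):
        # Greedy action with random tie-breaking, so untrained states are explored evenly.
        best = max(q[(s, act)] for act in ACTIONS)
        return random.choice([act for act in ACTIONS if q[(s, act)] == best])

    def q_learning_toward(assigned_goal, episodes=500):
        # Tabular Q-learning whose reward is internal: the agent is paid only
        # on entering its assigned goal, never at the other goal.
        q = defaultdict(float)
        for _ in range(episodes):
            s = START
            while s not in GOALS:
                a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(q, s)
                s_next = min(max(s + a, 0), 6)               # move, clamped to the corridor
                r = 1.0 if s_next == assigned_goal else 0.0  # internal reward only
                target = r + GAMMA * max(q[(s_next, act)] for act in ACTIONS)
                q[(s, a)] += ALPHA * (target - q[(s, a)])
                s = s_next
        return q

    # A PMRL-style assignment has the agent learn toward its farthest goal first;
    # a YARL-style assignment re-targets the nearer agent only after a conflict.
    # Either way, only the assignment passed in changes:
    q_far = q_learning_toward(assigned_goal=6)
    print(greedy(q_far, START))  # expected: 1 (head for the far goal, though goal 0 is nearer)

The design point this isolates is the one the abstract draws: PMRL fixes every agent's assignment up front, always producing cooperative behavior at the cost of learning the farthest goal, while YARL re-assigns only the agents actually in conflict, which is why it needs fewer learning iterations.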

Cite this article as:
F. Uwano and K. Takadama, “Comparison Between Reinforcement Learning Methods with Different Goal Selections in Multi-Agent Cooperation,” J. Adv. Comput. Intell. Intell. Inform., Vol.21 No.5, pp. 917-929, 2017.
References
  [1] K.-H. Park, Y.-J. Kim, and J.-H. Kim, “Modular Q-learning Based Multi-Agent Cooperation for Robot Soccer,” Robotics and Autonomous Systems, Vol.35, No.2, pp. 109-122, 2001.
  [2] M. Camara, O. Bonham-Carter, and J. Jumadinova, “A Multi-agent System with Reinforcement Learning Agents for Biomedical Text Mining,” Proc. of the 6th ACM Conf. on Bioinformatics, Computational Biology and Health Informatics (BCB ’15), pp. 634-643, ACM, New York, NY, USA, 2015.
  [3] H. Iima and Y. Kuroe, “Swarm Reinforcement Learning Methods Improving Certainty of Learning for a Multi-Robot Formation Problem,” Proc. of the IEEE Congress on Evolutionary Computation (CEC), pp. 3026-3033, May 2015.
  [4] Y. Ichikawa and K. Takadama, “Designing Internal Reward of Reinforcement Learning Agents in Multi-step Dilemma Problem,” J. Adv. Comput. Intell. Intell. Inform. (JACIII), Vol.17, No.6, pp. 926-931, 2013.
  [5] M. Elidrisi, N. Johnson, M. Gini, and J. Crandall, “Fast Adaptive Learning in Repeated Stochastic Games by Game Abstraction,” Proc. of the Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS), pp. 1141-1148, May 2014.
  [6] K. J. Prabuchandran, A. N. H. Kumar, and S. Bhatnagar, “Multiagent Reinforcement Learning for Traffic Signal Control,” Proc. of the 17th IEEE Int. Conf. on Intelligent Transportation Systems (ITSC), pp. 2529-2534, Oct. 2014.
  [7] M. Tan, “Multi-Agent Reinforcement Learning: Independent vs. Cooperative Agents,” Proc. of the 10th Int. Conf. on Machine Learning, pp. 330-337, Morgan Kaufmann, 1993.
  [8] K. Tuyls, K. Verbeeck, and T. Lenaerts, “A Selection-Mutation Model for Q-learning in Multi-Agent Systems,” Proc. of the 2nd Int. Joint Conf. on Autonomous Agents and Multiagent Systems (AAMAS), pp. 693-700, 2003.
  [9] E. Munoz de Cote, A. Lazaric, and M. Restelli, “Learning to Cooperate in Multi-Agent Social Dilemmas,” Proc. of the Int. Conf. on Autonomous Agents and Multiagent Systems (AAMAS), pp. 783-785, May 2006.
  [10] F. Uwano and K. Takadama, “Communication-Less Cooperative Q-Learning Agents in Maze Problem,” pp. 453-467, Springer International Publishing, Cham, 2017.
  [11] R. S. Sutton and A. G. Barto, “Reinforcement Learning: An Introduction,” MIT Press, Cambridge, MA, USA, 1st edition, 1998.
  [12] C. J. Watkins, “Learning from Delayed Rewards,” Ph.D. thesis, King’s College, Cambridge, 1989.
