JACIII Vol.15 No.7 pp. 896-903
doi: 10.20965/jaciii.2011.p0896


Group Behavior Learning in Multi-Agent Systems Based on Social Interaction Among Agents

Kun Zhang*, Yoichiro Maeda**, and Yasutake Takahashi**

*Dept. of System Design Engineering, Graduate School of Engineering, University of Fukui

**Dept. of Human and Artificial Intelligent Systems, Graduate School of Engineering, University of Fukui, 3-9-1 Bunkyo, Fukui 910-8507, Japan

Received March 5, 2011; accepted May 9, 2011; published September 20, 2011
Keywords: group behavior learning, multi-agent systems, reinforcement learning, state communication, social interaction
Research on multi-agent systems in which autonomous agents learn cooperative behavior has attracted growing attention in recent years. We aim to generate group behavior in multi-agents that possess a high level of autonomous learning ability, comparable to that of human beings, by having them acquire cooperative behavior through social interaction. Sharing environment states among agents improves their cooperative ability, and communicating changes in the shared environment state improves it further. On this basis, we use reward redistribution among agents to reinforce group behavior, and we propose a method for constructing a multi-agent system with the ability to create groups autonomously, which strengthens the cooperative behavior of the group as social agents.
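The reward-redistribution idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Agent class, the SHARE_RATE parameter, and the equal-split pooling rule are all assumptions chosen to show how sharing a fraction of each agent's reward couples the learners' returns.

```python
import random

SHARE_RATE = 0.3          # assumed fraction of each reward pooled and redistributed
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

class Agent:
    """Independent epsilon-greedy Q-learner (tabular)."""
    def __init__(self, n_states, n_actions):
        self.q = [[0.0] * n_actions for _ in range(n_states)]

    def act(self, s):
        if random.random() < EPS:
            return random.randrange(len(self.q[s]))
        row = self.q[s]
        return row.index(max(row))

    def update(self, s, a, r, s2):
        best_next = max(self.q[s2])
        self.q[s][a] += ALPHA * (r + GAMMA * best_next - self.q[s][a])

def redistribute(rewards):
    """Each agent keeps (1 - SHARE_RATE) of its own reward; the shared
    portion is pooled and split equally, so every agent's return depends
    in part on the group's success."""
    pool = sum(SHARE_RATE * r for r in rewards)
    return [(1 - SHARE_RATE) * r + pool / len(rewards) for r in rewards]
```

Note that the redistribution step conserves the total reward: with two agents and rewards [1.0, 0.0], the agents receive 0.85 and 0.15, so an unrewarded agent still gains from a teammate's success, which is the coupling that reinforces group behavior.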
Cite this article as:
K. Zhang, Y. Maeda, and Y. Takahashi, “Group Behavior Learning in Multi-Agent Systems Based on Social Interaction Among Agents,” J. Adv. Comput. Intell. Intell. Inform., Vol.15 No.7, pp. 896-903, 2011.

